mirror of
https://github.com/mudler/LocalAI
synced 2026-04-21 13:27:21 +00:00
Compare commits
138 commits
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
47efaf5b43 | ||
|
|
315b634a91 | ||
|
|
6b245299d7 | ||
|
|
677c0315c1 | ||
|
|
478522ce4d | ||
|
|
c54897ad44 | ||
|
|
8bb1e8f21f | ||
|
|
cd94a0b61a | ||
|
|
047bc48fa9 | ||
|
|
01bd8ae5d0 | ||
|
|
d9808769be | ||
|
|
5973c0a9df | ||
|
|
486b5e25a3 | ||
|
|
c66c41e8d7 | ||
|
|
02bb715c0a | ||
|
|
8ab56e2ad3 | ||
|
|
ecf85fde9e | ||
|
|
6480715a16 | ||
|
|
f683231811 | ||
|
|
960757f0e8 | ||
|
|
865fd552f5 | ||
|
|
cb77a5a4b9 | ||
|
|
60633c4dd5 | ||
|
|
9e44944cc1 | ||
|
|
372eb08dcf | ||
|
|
28091d626e | ||
|
|
cae79d9107 | ||
|
|
babbbc6ec8 | ||
|
|
3804497186 | ||
|
|
fda1c553a1 | ||
|
|
b27de08fff | ||
|
|
510f791ccc | ||
|
|
369c50a41c | ||
|
|
75a63f87d8 | ||
|
|
9cd8d7951f | ||
|
|
884bfb84c9 | ||
|
|
e94a9a8f10 | ||
|
|
054c4b4b45 | ||
|
|
6e49dba27c | ||
|
|
e463820566 | ||
|
|
8839a71c87 | ||
|
|
117f6430b8 | ||
|
|
7809c5f5d0 | ||
|
|
ad742738cb | ||
|
|
86c673fd94 | ||
|
|
c49feb546f | ||
|
|
844b0b760b | ||
|
|
55c05211d3 | ||
|
|
a90a8cf1d0 | ||
|
|
12b069f9bd | ||
|
|
48e87db400 | ||
|
|
7dbd9c056a | ||
|
|
7c5d6162f7 | ||
|
|
5837b14888 | ||
|
|
b6a68e5df4 | ||
|
|
c6dfb4acaf | ||
|
|
ec5935421c | ||
|
|
a0cbc46be9 | ||
|
|
b4e30692a2 | ||
|
|
61d34ccb11 | ||
|
|
7f88a3ba30 | ||
|
|
c4f309388e | ||
|
|
ab326a9c61 | ||
|
|
df2d25cee5 | ||
|
|
96cd561d9d | ||
|
|
08445b1b89 | ||
|
|
ad3c8c4832 | ||
|
|
6f0051301b | ||
|
|
8487058673 | ||
|
|
62862ca06b | ||
|
|
07e244d869 | ||
|
|
95efb8a562 | ||
|
|
410d100cc3 | ||
|
|
833b7e8557 | ||
|
|
87e6de1989 | ||
|
|
b361d2ddd6 | ||
|
|
1e4c4577bb | ||
|
|
98fd9d5cc6 | ||
|
|
0c725f5702 | ||
|
|
7661a4ffa5 | ||
|
|
24ad6e4be1 | ||
|
|
c0648b8836 | ||
|
|
a05c7def59 | ||
|
|
906acba8db | ||
|
|
4226ca4aee | ||
|
|
c6d5dc3374 | ||
|
|
7ce675af21 | ||
|
|
be1b8d56c9 | ||
|
|
97f087ed31 | ||
|
|
8691bbe663 | ||
|
|
7998f96f11 | ||
|
|
cada97ee46 | ||
|
|
3375ea1a2c | ||
|
|
0e7c0adee4 | ||
|
|
016da02845 | ||
|
|
daa0272f2e | ||
|
|
d67623230f | ||
|
|
0f90d17aac | ||
|
|
ea32b8953f | ||
|
|
bc7578bdb1 | ||
|
|
9ca03cf9cc | ||
|
|
151ad271f2 | ||
|
|
2865f0f8d3 | ||
|
|
6fbda277c5 | ||
|
|
7a0e6ae6d2 | ||
|
|
e4bfc42a2d | ||
|
|
7edd3ea96f | ||
|
|
b20a2f1cea | ||
|
|
8ab0744458 | ||
|
|
7c1865b307 | ||
|
|
62a674ce12 | ||
|
|
c39213443b | ||
|
|
606f462da4 | ||
|
|
5c35e85fe2 | ||
|
|
062e0d0d00 | ||
|
|
d4cd6c284f | ||
|
|
3bb8b65d31 | ||
|
|
9748a1cbc6 | ||
|
|
6bc76dda6d | ||
|
|
e1a6010874 | ||
|
|
706cf5d43c | ||
|
|
13a6ed709c | ||
|
|
85be4ff03c | ||
|
|
b0d9ce4905 | ||
|
|
7081b54c09 | ||
|
|
2b05420f95 | ||
|
|
b64347b6aa | ||
|
|
e00ce981f0 | ||
|
|
285f7d4340 | ||
|
|
ea6e850809 | ||
|
|
b7247fc148 | ||
|
|
39c6b3ed66 | ||
|
|
0e9d1a6588 | ||
|
|
510d6759fe | ||
|
|
154fa000d3 | ||
|
|
0526e60f8d | ||
|
|
db600fb5b2 | ||
|
|
9ac1bdc587 |
389 changed files with 34449 additions and 3223 deletions
|
|
@ -28,7 +28,7 @@ Add build matrix entries for each platform/GPU type you want to support. Look at
|
|||
- CUDA 13 builds: Add after other CUDA 13 builds (e.g., after `gpu-nvidia-cuda-13-chatterbox`)
|
||||
|
||||
**Additional build types you may need:**
|
||||
- ROCm/HIP: Use `build-type: 'hipblas'` with `base-image: "rocm/dev-ubuntu-24.04:6.4.4"`
|
||||
- ROCm/HIP: Use `build-type: 'hipblas'` with `base-image: "rocm/dev-ubuntu-24.04:7.2.1"`
|
||||
- Intel/SYCL: Use `build-type: 'intel'` or `build-type: 'sycl_f16'`/`sycl_f32` with `base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"`
|
||||
- L4T (ARM): Use `build-type: 'l4t'` with `platforms: 'linux/arm64'` and `runs-on: 'ubuntu-24.04-arm'`
|
||||
|
||||
|
|
@ -129,6 +129,30 @@ After adding a new backend, verify:
|
|||
- [ ] No Makefile syntax errors (check with linter)
|
||||
- [ ] Follows the same pattern as similar backends (e.g., if it's a transcription backend, follow `faster-whisper` pattern)
|
||||
|
||||
## Bundling runtime shared libraries (`package.sh`)
|
||||
|
||||
The final `Dockerfile.python` stage is `FROM scratch` — there is no system `libc`, no `apt`, no fallback library path. Only files explicitly copied from the builder stage end up in the backend image. That means any runtime `dlopen` your backend (or its Python deps) needs **must** be packaged into `${BACKEND}/lib/`.
|
||||
|
||||
Pattern:
|
||||
|
||||
1. Make sure the library is installed in the builder stage of `backend/Dockerfile.python` (add it to the top-level `apt-get install`).
|
||||
2. Drop a `package.sh` in your backend directory that copies the library — and its soname symlinks — into `$(dirname $0)/lib`. See `backend/python/vllm/package.sh` for a reference implementation that walks `/usr/lib/x86_64-linux-gnu`, `/usr/lib/aarch64-linux-gnu`, etc.
|
||||
3. `Dockerfile.python` already runs `package.sh` automatically if it exists, after `package-gpu-libs.sh`.
|
||||
4. `libbackend.sh` automatically prepends `${EDIR}/lib` to `LD_LIBRARY_PATH` at run time, so anything packaged this way is found by `dlopen`.
|
||||
|
||||
How to find missing libs: when a Python module silently fails to register torch ops or you see `AttributeError: '_OpNamespace' '...' object has no attribute '...'`, run the backend image's Python with `LD_DEBUG=libs` to see which `dlopen` failed. The filename in the error message (e.g. `libnuma.so.1`) is what you need to package.
|
||||
|
||||
To verify packaging works without trusting the host:
|
||||
|
||||
```bash
|
||||
make docker-build-<backend>
|
||||
CID=$(docker create --entrypoint=/run.sh local-ai-backend:<backend>)
|
||||
docker cp $CID:/lib /tmp/check && docker rm $CID
|
||||
ls /tmp/check # expect the bundled .so files + symlinks
|
||||
```
|
||||
|
||||
Then boot it inside a fresh `ubuntu:24.04` (which intentionally does *not* have the lib installed) to confirm it actually loads from the backend dir.
|
||||
|
||||
## 6. Example: Adding a Python Backend
|
||||
|
||||
For reference, when `moonshine` was added:
|
||||
|
|
|
|||
111
.agents/adding-gallery-models.md
Normal file
111
.agents/adding-gallery-models.md
Normal file
|
|
@ -0,0 +1,111 @@
|
|||
# Adding GGUF Models from HuggingFace to the Gallery
|
||||
|
||||
When adding a GGUF model from HuggingFace to the LocalAI model gallery, follow this guide.
|
||||
|
||||
## Gallery file
|
||||
|
||||
All models are defined in `gallery/index.yaml`. Find the appropriate section (embedding models near other embeddings, chat models near similar chat models) and add a new entry.
|
||||
|
||||
## Getting the SHA256
|
||||
|
||||
GGUF files on HuggingFace expose their SHA256 via the `x-linked-etag` HTTP header. Fetch it with:
|
||||
|
||||
```bash
|
||||
curl -sI "https://huggingface.co/<org>/<repo>/resolve/main/<filename>.gguf" | grep -i x-linked-etag
|
||||
```
|
||||
|
||||
The value (without quotes) is the SHA256 hash. Example:
|
||||
|
||||
```bash
|
||||
curl -sI "https://huggingface.co/ggml-org/embeddinggemma-300m-qat-q8_0-GGUF/resolve/main/embeddinggemma-300m-qat-Q8_0.gguf" | grep -i x-linked-etag
|
||||
# x-linked-etag: "6fa0c02a9c302be6f977521d399b4de3a46310a4f2621ee0063747881b673f67"
|
||||
```
|
||||
|
||||
**Important**: Pay attention to exact filename casing — HuggingFace filenames are case-sensitive (e.g., `Q8_0` vs `q8_0`). Check the repo's file listing to get the exact name.
|
||||
|
||||
## Entry format — Embedding models
|
||||
|
||||
Embedding models use `gallery/virtual.yaml` as the base config and set `embeddings: true`:
|
||||
|
||||
```yaml
|
||||
- name: "model-name"
|
||||
url: github:mudler/LocalAI/gallery/virtual.yaml@master
|
||||
urls:
|
||||
- https://huggingface.co/<original-model-org>/<original-model-name>
|
||||
- https://huggingface.co/<gguf-org>/<gguf-repo-name>
|
||||
description: |
|
||||
Short description of the model, its size, and capabilities.
|
||||
tags:
|
||||
- embeddings
|
||||
overrides:
|
||||
backend: llama-cpp
|
||||
embeddings: true
|
||||
parameters:
|
||||
model: <filename>.gguf
|
||||
files:
|
||||
- filename: <filename>.gguf
|
||||
uri: huggingface://<gguf-org>/<gguf-repo-name>/<filename>.gguf
|
||||
sha256: <sha256-hash>
|
||||
```
|
||||
|
||||
## Entry format — Chat/LLM models
|
||||
|
||||
Chat models typically reference a template config (e.g., `gallery/gemma.yaml`, `gallery/chatml.yaml`) that defines the prompt format. Use YAML anchors (`&name` / `*name`) if adding multiple quantization variants of the same model:
|
||||
|
||||
```yaml
|
||||
- &model-anchor
|
||||
url: "github:mudler/LocalAI/gallery/<template>.yaml@master"
|
||||
name: "model-name"
|
||||
icon: https://example.com/icon.png
|
||||
license: <license>
|
||||
urls:
|
||||
- https://huggingface.co/<org>/<model>
|
||||
- https://huggingface.co/<gguf-org>/<gguf-repo>
|
||||
description: |
|
||||
Model description.
|
||||
tags:
|
||||
- llm
|
||||
- gguf
|
||||
- gpu
|
||||
- cpu
|
||||
overrides:
|
||||
parameters:
|
||||
model: <filename>-Q4_K_M.gguf
|
||||
files:
|
||||
- filename: <filename>-Q4_K_M.gguf
|
||||
sha256: <sha256>
|
||||
uri: huggingface://<gguf-org>/<gguf-repo>/<filename>-Q4_K_M.gguf
|
||||
```
|
||||
|
||||
To add a variant (e.g., different quantization), use YAML merge:
|
||||
|
||||
```yaml
|
||||
- !!merge <<: *model-anchor
|
||||
name: "model-name-q8"
|
||||
overrides:
|
||||
parameters:
|
||||
model: <filename>-Q8_0.gguf
|
||||
files:
|
||||
- filename: <filename>-Q8_0.gguf
|
||||
sha256: <sha256>
|
||||
uri: huggingface://<gguf-org>/<gguf-repo>/<filename>-Q8_0.gguf
|
||||
```
|
||||
|
||||
## Available template configs
|
||||
|
||||
Look at existing `.yaml` files in `gallery/` to find the right prompt template for your model architecture:
|
||||
|
||||
- `gemma.yaml` — Gemma-family models (gemma, embeddinggemma, etc.)
|
||||
- `chatml.yaml` — ChatML format (many Mistral/OpenHermes models)
|
||||
- `deepseek.yaml` — DeepSeek models
|
||||
- `virtual.yaml` — Minimal base (good for embedding models that don't need chat templates)
|
||||
|
||||
## Checklist
|
||||
|
||||
1. **Find the GGUF file** on HuggingFace — note exact filename (case-sensitive)
|
||||
2. **Get the SHA256** using the `curl -sI` + `x-linked-etag` method above
|
||||
3. **Choose the right template** config from `gallery/` based on model architecture
|
||||
4. **Add the entry** to `gallery/index.yaml` near similar models
|
||||
5. **Set `embeddings: true`** if it's an embedding model
|
||||
6. **Include both URLs** — the original model page and the GGUF repo
|
||||
7. **Write a description** — mention model size, capabilities, and quantization type
|
||||
101
.agents/ai-coding-assistants.md
Normal file
101
.agents/ai-coding-assistants.md
Normal file
|
|
@ -0,0 +1,101 @@
|
|||
# AI Coding Assistants
|
||||
|
||||
This document provides guidance for AI tools and developers using AI
|
||||
assistance when contributing to LocalAI.
|
||||
|
||||
**LocalAI follows the same guidelines as the Linux kernel project for
|
||||
AI-assisted contributions.** See the upstream policy here:
|
||||
<https://docs.kernel.org/process/coding-assistants.html>
|
||||
|
||||
The rules below mirror that policy, adapted to LocalAI's license and
|
||||
project layout. If anything is unclear, the kernel document is the
|
||||
authoritative reference for intent.
|
||||
|
||||
AI tools helping with LocalAI development should follow the standard
|
||||
project development process:
|
||||
|
||||
- [CONTRIBUTING.md](../CONTRIBUTING.md) — development workflow, commit
|
||||
conventions, and PR guidelines
|
||||
- [.agents/coding-style.md](coding-style.md) — code style, editorconfig,
|
||||
logging, and documentation conventions
|
||||
- [.agents/building-and-testing.md](building-and-testing.md) — build and
|
||||
test procedures
|
||||
|
||||
## Licensing and Legal Requirements
|
||||
|
||||
All contributions must comply with LocalAI's licensing requirements:
|
||||
|
||||
- LocalAI is licensed under the **MIT License** — see the [LICENSE](../LICENSE)
|
||||
file
|
||||
- New source files should use the SPDX license identifier `MIT` where
|
||||
applicable to the file type
|
||||
- Contributions must be compatible with the MIT License and must not
|
||||
introduce code under incompatible licenses (e.g., GPL) without an
|
||||
explicit discussion with maintainers
|
||||
|
||||
## Signed-off-by and Developer Certificate of Origin
|
||||
|
||||
**AI agents MUST NOT add `Signed-off-by` tags.** Only humans can legally
|
||||
certify the Developer Certificate of Origin (DCO). The human submitter
|
||||
is responsible for:
|
||||
|
||||
- Reviewing all AI-generated code
|
||||
- Ensuring compliance with licensing requirements
|
||||
- Adding their own `Signed-off-by` tag (when the project requires DCO)
|
||||
to certify the contribution
|
||||
- Taking full responsibility for the contribution
|
||||
|
||||
AI agents MUST NOT add `Co-Authored-By` trailers for themselves either.
|
||||
A human reviewer owns the contribution; the AI's involvement is recorded
|
||||
via `Assisted-by` (see below).
|
||||
|
||||
## Attribution
|
||||
|
||||
When AI tools contribute to LocalAI development, proper attribution helps
|
||||
track the evolving role of AI in the development process. Contributions
|
||||
should include an `Assisted-by` tag in the commit message trailer in the
|
||||
following format:
|
||||
|
||||
```
|
||||
Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
|
||||
```
|
||||
|
||||
Where:
|
||||
|
||||
- `AGENT_NAME` — name of the AI tool or framework (e.g., `Claude`,
|
||||
`Copilot`, `Cursor`)
|
||||
- `MODEL_VERSION` — specific model version used (e.g.,
|
||||
`claude-opus-4-7`, `gpt-5`)
|
||||
- `[TOOL1] [TOOL2]` — optional specialized analysis tools invoked by the
|
||||
agent (e.g., `golangci-lint`, `staticcheck`, `go vet`)
|
||||
|
||||
Basic development tools (git, go, make, editors) should **not** be listed.
|
||||
|
||||
### Example
|
||||
|
||||
```
|
||||
fix(llama-cpp): handle empty tool call arguments
|
||||
|
||||
Previously the parser panicked when the model returned a tool call with
|
||||
an empty arguments object. Fall back to an empty JSON object in that
|
||||
case so downstream consumers receive a valid payload.
|
||||
|
||||
Assisted-by: Claude:claude-opus-4-7 golangci-lint
|
||||
Signed-off-by: Jane Developer <jane@example.com>
|
||||
```
|
||||
|
||||
## Scope and Responsibility
|
||||
|
||||
Using an AI assistant does not reduce the contributor's responsibility.
|
||||
The human submitter must:
|
||||
|
||||
- Understand every line that lands in the PR
|
||||
- Verify that generated code compiles, passes tests, and follows the
|
||||
project style
|
||||
- Confirm that any referenced APIs, flags, or file paths actually exist
|
||||
in the current tree (AI models may hallucinate identifiers)
|
||||
- Not submit AI output verbatim without review
|
||||
|
||||
Reviewers may ask for clarification on any change regardless of how it
|
||||
was produced. "An AI wrote it" is not an acceptable answer to a design
|
||||
question.
|
||||
|
|
@ -10,7 +10,7 @@ Let's say the user wants to build a particular backend for a given platform. For
|
|||
- At a minimum we need to set the BUILD_TYPE, BASE_IMAGE build-args
|
||||
- Use .github/workflows/backend.yml as a reference it lists the needed args in the `include` job strategy matrix
|
||||
- l4t and cublas also requires the CUDA major and minor version
|
||||
- You can pretty print a command like `DOCKER_MAKEFLAGS=-j$(nproc --ignore=1) BUILD_TYPE=hipblas BASE_IMAGE=rocm/dev-ubuntu-24.04:6.4.4 make docker-build-coqui`
|
||||
- You can pretty print a command like `DOCKER_MAKEFLAGS=-j$(nproc --ignore=1) BUILD_TYPE=hipblas BASE_IMAGE=rocm/dev-ubuntu-24.04:7.2.1 make docker-build-coqui`
|
||||
- Unless the user specifies that they want you to run the command, then just print it because not all agent frontends handle long running jobs well and the output may overflow your context
|
||||
- The user may say they want to build AMD or ROCM instead of hipblas, or Intel instead of SYCL or NVIDIA insted of l4t or cublas. Ask for confirmation if there is ambiguity.
|
||||
- Sometimes the user may need extra parameters to be added to `docker build` (e.g. `--platform` for cross-platform builds or `--progress` to view the full logs), in which case you can generate the `docker build` command directly.
|
||||
|
|
|
|||
115
.agents/vllm-backend.md
Normal file
115
.agents/vllm-backend.md
Normal file
|
|
@ -0,0 +1,115 @@
|
|||
# Working on the vLLM Backend
|
||||
|
||||
The vLLM backend lives at `backend/python/vllm/backend.py` (async gRPC) and the multimodal variant at `backend/python/vllm-omni/backend.py` (sync gRPC). Both wrap vLLM's `AsyncLLMEngine` / `Omni` and translate the LocalAI gRPC `PredictOptions` into vLLM `SamplingParams` + outputs into `Reply.chat_deltas`.
|
||||
|
||||
This file captures the non-obvious bits — most of the bring-up was a single PR (`feat/vllm-parity`) and the things below are easy to get wrong.
|
||||
|
||||
## Tool calling and reasoning use vLLM's *native* parsers
|
||||
|
||||
Do not write regex-based tool-call extractors for vLLM. vLLM ships:
|
||||
|
||||
- `vllm.tool_parsers.ToolParserManager` — 50+ registered parsers (`hermes`, `llama3_json`, `llama4_pythonic`, `mistral`, `qwen3_xml`, `deepseek_v3`, `granite4`, `openai`, `kimi_k2`, `glm45`, …)
|
||||
- `vllm.reasoning.ReasoningParserManager` — 25+ registered parsers (`deepseek_r1`, `qwen3`, `mistral`, `gemma4`, …)
|
||||
|
||||
Both can be used standalone: instantiate with a tokenizer, call `extract_tool_calls(text, request=None)` / `extract_reasoning(text, request=None)`. The backend stores the parser *classes* on `self.tool_parser_cls` / `self.reasoning_parser_cls` at LoadModel time and instantiates them per request.
|
||||
|
||||
**Selection:** vLLM does *not* auto-detect parsers from model name — neither does the LocalAI backend. The user (or `core/config/hooks_vllm.go`) must pick one and pass it via `Options[]`:
|
||||
|
||||
```yaml
|
||||
options:
|
||||
- tool_parser:hermes
|
||||
- reasoning_parser:qwen3
|
||||
```
|
||||
|
||||
Auto-defaults for known model families live in `core/config/parser_defaults.json` and are applied:
|
||||
- at gallery import time by `core/gallery/importers/vllm.go`
|
||||
- at model load time by the `vllm` / `vllm-omni` backend hook in `core/config/hooks_vllm.go`
|
||||
|
||||
User-supplied `tool_parser:`/`reasoning_parser:` in the config wins over defaults — the hook checks for existing entries before appending.
|
||||
|
||||
**When to update `parser_defaults.json`:** any time vLLM ships a new tool or reasoning parser, or you onboard a new model family that LocalAI users will pull from HuggingFace. The file is keyed by *family pattern* matched against `normalizeModelID(cfg.Model)` (lowercase, org-prefix stripped, `_`→`-`). Patterns are checked **longest-first** — keep `qwen3.5` before `qwen3`, `llama-3.3` before `llama-3`, etc., or the wrong family wins. Add a covering test in `core/config/hooks_test.go`.
|
||||
|
||||
**Sister file — `core/config/inference_defaults.json`:** same pattern but for sampling parameters (temperature, top_p, top_k, min_p, repeat_penalty, presence_penalty). Loaded by `core/config/inference_defaults.go` and applied by `ApplyInferenceDefaults()`. The schema is `map[string]float64` only — *strings don't fit*, which is why parser defaults needed their own JSON file. The inference file is **auto-generated from unsloth** via `go generate ./core/config/` (see `core/config/gen_inference_defaults/`) — don't hand-edit it; instead update the upstream source or regenerate. Both files share `normalizeModelID()` and the longest-first pattern ordering.
|
||||
|
||||
**Constructor compatibility gotcha:** the abstract `ToolParser.__init__` accepts `tools=`, but several concrete parsers (Hermes2ProToolParser, etc.) override `__init__` and *only* accept `tokenizer`. Always:
|
||||
|
||||
```python
|
||||
try:
|
||||
tp = self.tool_parser_cls(self.tokenizer, tools=tools)
|
||||
except TypeError:
|
||||
tp = self.tool_parser_cls(self.tokenizer)
|
||||
```
|
||||
|
||||
## ChatDelta is the streaming contract
|
||||
|
||||
The Go side (`core/backend/llm.go`, `pkg/functions/chat_deltas.go`) consumes `Reply.chat_deltas` to assemble the OpenAI response. For tool calls to surface in `chat/completions`, the Python backend **must** populate `Reply.chat_deltas[].tool_calls` with `ToolCallDelta{index, id, name, arguments}`. Returning the raw `<tool_call>...</tool_call>` text in `Reply.message` is *not* enough — the Go regex fallback exists for llama.cpp, not for vllm.
|
||||
|
||||
Same story for `reasoning_content` — emit it on `ChatDelta.reasoning_content`, not as part of `content`.
|
||||
|
||||
## Message conversion to chat templates
|
||||
|
||||
`tokenizer.apply_chat_template()` expects a list of dicts, not proto Messages. The shared helper in `backend/python/common/vllm_utils.py` (`messages_to_dicts`) handles the mapping including:
|
||||
|
||||
- `tool_call_id` and `name` for `role="tool"` messages
|
||||
- `tool_calls` JSON-string field → parsed Python list for `role="assistant"`
|
||||
- `reasoning_content` for thinking models
|
||||
|
||||
Pass `tools=json.loads(request.Tools)` and (when `request.Metadata.get("enable_thinking") == "true"`) `enable_thinking=True` to `apply_chat_template`. Wrap in `try/except TypeError` because not every tokenizer template accepts those kwargs.
|
||||
|
||||
## CPU support and the SIMD/library minefield
|
||||
|
||||
vLLM publishes prebuilt CPU wheels at `https://github.com/vllm-project/vllm/releases/...`. The pin lives in `backend/python/vllm/requirements-cpu-after.txt`.
|
||||
|
||||
**Version compatibility — important:** newer vllm CPU wheels (≥ 0.15) declare `torch==2.10.0+cpu` as a hard dep, but `torch==2.10.0` only exists on the PyTorch test channel and pulls in an incompatible `torchvision`. Stay on **`vllm 0.14.1+cpu` + `torch 2.9.1+cpu`** until both upstream catch up. Bumping requires verifying torchvision/torchaudio match.
|
||||
|
||||
`requirements-cpu.txt` uses `--extra-index-url https://download.pytorch.org/whl/cpu`. `install.sh` adds `--index-strategy=unsafe-best-match` for the `cpu` profile so uv resolves transformers/vllm from PyPI while pulling torch from the PyTorch index.
|
||||
|
||||
**SIMD baseline:** the prebuilt CPU wheel is compiled with AVX-512 VNNI/BF16. On a CPU without those instructions, importing `vllm.model_executor.models.registry` SIGILLs at `_run_in_subprocess` time during model inspection. There is no runtime flag to disable it. Workarounds:
|
||||
|
||||
1. **Run on a host with the right SIMD baseline** (default — fast)
|
||||
2. **Build from source** with `FROM_SOURCE=true` env var. Plumbing exists end-to-end:
|
||||
- `install.sh` hides `requirements-cpu-after.txt`, runs `installRequirements` for the base deps, then clones vllm and `VLLM_TARGET_DEVICE=cpu uv pip install --no-deps .`
|
||||
- `backend/Dockerfile.python` declares `ARG FROM_SOURCE` + `ENV FROM_SOURCE`
|
||||
- `Makefile` `docker-build-backend` macro forwards `--build-arg FROM_SOURCE=$(FROM_SOURCE)` when set
|
||||
- Source build takes 30–50 minutes — too slow for per-PR CI but fine for local.
|
||||
|
||||
**Runtime shared libraries:** vLLM's `vllm._C` extension `dlopen`s `libnuma.so.1` at import time. If missing, the C extension silently fails and `torch.ops._C_utils.init_cpu_threads_env` is never registered → `EngineCore` crashes on `init_device` with:
|
||||
|
||||
```
|
||||
AttributeError: '_OpNamespace' '_C_utils' object has no attribute 'init_cpu_threads_env'
|
||||
```
|
||||
|
||||
`backend/python/vllm/package.sh` bundles `libnuma.so.1` and `libgomp.so.1` into `${BACKEND}/lib/`, which `libbackend.sh` adds to `LD_LIBRARY_PATH` at run time. The builder stage in `backend/Dockerfile.python` installs `libnuma1`/`libgomp1` so package.sh has something to copy. Do *not* assume the production host has these — backend images are `FROM scratch`.
|
||||
|
||||
## Backend hook system (`core/config/backend_hooks.go`)
|
||||
|
||||
Per-backend defaults that used to be hardcoded in `ModelConfig.Prepare()` now live in `core/config/hooks_*.go` files and self-register via `init()`:
|
||||
|
||||
- `hooks_llamacpp.go` → GGUF metadata parsing, context size, GPU layers, jinja template
|
||||
- `hooks_vllm.go` → tool/reasoning parser auto-selection from `parser_defaults.json`
|
||||
|
||||
Hook keys:
|
||||
- `"llama-cpp"`, `"vllm"`, `"vllm-omni"`, … — backend-specific
|
||||
- `""` — runs only when `cfg.Backend` is empty (auto-detect case)
|
||||
- `"*"` — global catch-all, runs for every backend before specific hooks
|
||||
|
||||
Multiple hooks per key are supported and run in registration order. Adding a new backend default:
|
||||
|
||||
```go
|
||||
// core/config/hooks_<backend>.go
|
||||
func init() {
|
||||
RegisterBackendHook("<backend>", myDefaults)
|
||||
}
|
||||
func myDefaults(cfg *ModelConfig, modelPath string) {
|
||||
// only fill in fields the user didn't set
|
||||
}
|
||||
```
|
||||
|
||||
## The `Messages.ToProto()` fields you need to set
|
||||
|
||||
`core/schema/message.go:ToProto()` must serialize:
|
||||
- `ToolCallID` → `proto.Message.ToolCallId` (for `role="tool"` messages — links result back to the call)
|
||||
- `Reasoning` → `proto.Message.ReasoningContent`
|
||||
- `ToolCalls` → `proto.Message.ToolCalls` (JSON-encoded string)
|
||||
|
||||
These were originally not serialized and tool-calling conversations broke silently — the C++ llama.cpp backend reads them but always got empty strings. Any new field added to `schema.Message` *and* `proto.Message` needs a matching line in `ToProto()`.
|
||||
446
.github/gallery-agent/agent.go
vendored
446
.github/gallery-agent/agent.go
vendored
|
|
@ -1,446 +0,0 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"regexp"
|
||||
"slices"
|
||||
"strings"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
hfapi "github.com/mudler/LocalAI/pkg/huggingface-api"
|
||||
"github.com/mudler/cogito"
|
||||
"github.com/mudler/cogito/clients"
|
||||
"github.com/mudler/cogito/structures"
|
||||
"github.com/sashabaranov/go-openai/jsonschema"
|
||||
)
|
||||
|
||||
var (
|
||||
openAIModel = os.Getenv("OPENAI_MODEL")
|
||||
openAIKey = os.Getenv("OPENAI_KEY")
|
||||
openAIBaseURL = os.Getenv("OPENAI_BASE_URL")
|
||||
galleryIndexPath = os.Getenv("GALLERY_INDEX_PATH")
|
||||
//defaultclient
|
||||
llm = clients.NewOpenAILLM(openAIModel, openAIKey, openAIBaseURL)
|
||||
)
|
||||
|
||||
// cleanTextContent removes trailing spaces, tabs, and normalizes line endings
|
||||
// to prevent YAML linting issues like trailing spaces and multiple empty lines
|
||||
func cleanTextContent(text string) string {
|
||||
lines := strings.Split(text, "\n")
|
||||
var cleanedLines []string
|
||||
var prevEmpty bool
|
||||
for _, line := range lines {
|
||||
// Remove all trailing whitespace (spaces, tabs, etc.)
|
||||
trimmed := strings.TrimRight(line, " \t\r")
|
||||
// Avoid multiple consecutive empty lines
|
||||
if trimmed == "" {
|
||||
if !prevEmpty {
|
||||
cleanedLines = append(cleanedLines, "")
|
||||
}
|
||||
prevEmpty = true
|
||||
} else {
|
||||
cleanedLines = append(cleanedLines, trimmed)
|
||||
prevEmpty = false
|
||||
}
|
||||
}
|
||||
// Remove trailing empty lines from the result
|
||||
result := strings.Join(cleanedLines, "\n")
|
||||
return stripThinkingTags(strings.TrimRight(result, "\n"))
|
||||
}
|
||||
|
||||
type galleryModel struct {
|
||||
Name string `yaml:"name"`
|
||||
Urls []string `yaml:"urls"`
|
||||
}
|
||||
|
||||
// isModelExisting checks if a specific model ID exists in the gallery using text search
|
||||
func isModelExisting(modelID string) (bool, error) {
|
||||
indexPath := getGalleryIndexPath()
|
||||
content, err := os.ReadFile(indexPath)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to read %s: %w", indexPath, err)
|
||||
}
|
||||
|
||||
var galleryModels []galleryModel
|
||||
|
||||
err = yaml.Unmarshal(content, &galleryModels)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to unmarshal %s: %w", indexPath, err)
|
||||
}
|
||||
|
||||
for _, galleryModel := range galleryModels {
|
||||
if slices.Contains(galleryModel.Urls, modelID) {
|
||||
return true, nil
|
||||
}
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// filterExistingModels removes models that already exist in the gallery
|
||||
func filterExistingModels(models []ProcessedModel) ([]ProcessedModel, error) {
|
||||
var filteredModels []ProcessedModel
|
||||
for _, model := range models {
|
||||
exists, err := isModelExisting(model.ModelID)
|
||||
if err != nil {
|
||||
fmt.Printf("Error checking if model %s exists: %v, skipping\n", model.ModelID, err)
|
||||
continue
|
||||
}
|
||||
|
||||
if !exists {
|
||||
filteredModels = append(filteredModels, model)
|
||||
} else {
|
||||
fmt.Printf("Skipping existing model: %s\n", model.ModelID)
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("Filtered out %d existing models, %d new models remaining\n",
|
||||
len(models)-len(filteredModels), len(filteredModels))
|
||||
|
||||
return filteredModels, nil
|
||||
}
|
||||
|
||||
// getGalleryIndexPath returns the gallery index file path, with a default fallback
|
||||
func getGalleryIndexPath() string {
|
||||
if galleryIndexPath != "" {
|
||||
return galleryIndexPath
|
||||
}
|
||||
return "gallery/index.yaml"
|
||||
}
|
||||
|
||||
func stripThinkingTags(content string) string {
|
||||
// Remove content between <thinking> and </thinking> (including multi-line)
|
||||
content = regexp.MustCompile(`(?s)<thinking>.*?</thinking>`).ReplaceAllString(content, "")
|
||||
// Remove content between <think> and </think> (including multi-line)
|
||||
content = regexp.MustCompile(`(?s)<think>.*?</think>`).ReplaceAllString(content, "")
|
||||
// Clean up any extra whitespace
|
||||
content = strings.TrimSpace(content)
|
||||
return content
|
||||
}
|
||||
|
||||
func getRealReadme(ctx context.Context, repository string) (string, error) {
|
||||
// Create a conversation fragment
|
||||
fragment := cogito.NewEmptyFragment().
|
||||
AddMessage("user",
|
||||
`Your task is to get a clear description of a large language model from huggingface by using the provided tool. I will share with you a repository that might be quantized, and as such probably not by the original model author. We need to get the real description of the model, and not the one that might be quantized. You will have to call the tool to get the readme more than once by figuring out from the quantized readme which is the base model readme. This is the repository: `+repository)
|
||||
|
||||
// Execute with tools
|
||||
result, err := cogito.ExecuteTools(llm, fragment,
|
||||
cogito.WithIterations(3),
|
||||
cogito.WithMaxAttempts(3),
|
||||
cogito.DisableSinkState,
|
||||
cogito.WithTools(&HFReadmeTool{client: hfapi.NewClient()}))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
result = result.AddMessage("user", "Describe the model in a clear and concise way that can be shared in a model gallery.")
|
||||
|
||||
// Get a response
|
||||
_, err = llm.Ask(ctx, result)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
content := result.LastMessage().Content
|
||||
return cleanTextContent(content), nil
|
||||
}
|
||||
|
||||
func selectMostInterestingModels(ctx context.Context, searchResult *SearchResult) ([]ProcessedModel, error) {
|
||||
|
||||
if len(searchResult.Models) == 1 {
|
||||
return searchResult.Models, nil
|
||||
}
|
||||
|
||||
// Create a conversation fragment
|
||||
fragment := cogito.NewEmptyFragment().
|
||||
AddMessage("user",
|
||||
`Your task is to analyze a list of AI models and select the most interesting ones for a model gallery. You will be given detailed information about multiple models including their metadata, file information, and README content.
|
||||
|
||||
Consider the following criteria when selecting models:
|
||||
1. Model popularity (download count)
|
||||
2. Model recency (last modified date)
|
||||
3. Model completeness (has preferred model file, README, etc.)
|
||||
4. Model uniqueness (not duplicates or very similar models)
|
||||
5. Model quality (based on README content and description)
|
||||
6. Model utility (practical applications)
|
||||
|
||||
You should select models that would be most valuable for users browsing a model gallery. Prioritize models that are:
|
||||
- Well-documented with clear READMEs
|
||||
- Recently updated
|
||||
- Popular (high download count)
|
||||
- Have the preferred quantization format available
|
||||
- Offer unique capabilities or are from reputable authors
|
||||
|
||||
Return your analysis and selection reasoning.`)
|
||||
|
||||
// Add the search results as context
|
||||
modelsInfo := fmt.Sprintf("Found %d models matching '%s' with quantization preference '%s':\n\n",
|
||||
searchResult.TotalModelsFound, searchResult.SearchTerm, searchResult.Quantization)
|
||||
|
||||
for i, model := range searchResult.Models {
|
||||
modelsInfo += fmt.Sprintf("Model %d:\n", i+1)
|
||||
modelsInfo += fmt.Sprintf(" ID: %s\n", model.ModelID)
|
||||
modelsInfo += fmt.Sprintf(" Author: %s\n", model.Author)
|
||||
modelsInfo += fmt.Sprintf(" Downloads: %d\n", model.Downloads)
|
||||
modelsInfo += fmt.Sprintf(" Last Modified: %s\n", model.LastModified)
|
||||
modelsInfo += fmt.Sprintf(" Files: %d files\n", len(model.Files))
|
||||
|
||||
if model.PreferredModelFile != nil {
|
||||
modelsInfo += fmt.Sprintf(" Preferred Model File: %s (%d bytes)\n",
|
||||
model.PreferredModelFile.Path, model.PreferredModelFile.Size)
|
||||
} else {
|
||||
modelsInfo += " No preferred model file found\n"
|
||||
}
|
||||
|
||||
if model.ReadmeContent != "" {
|
||||
modelsInfo += fmt.Sprintf(" README: %s\n", model.ReadmeContent)
|
||||
}
|
||||
|
||||
if model.ProcessingError != "" {
|
||||
modelsInfo += fmt.Sprintf(" Processing Error: %s\n", model.ProcessingError)
|
||||
}
|
||||
|
||||
modelsInfo += "\n"
|
||||
}
|
||||
|
||||
fragment = fragment.AddMessage("user", modelsInfo)
|
||||
|
||||
fragment = fragment.AddMessage("user", "Based on your analysis, select the top 5 most interesting models and provide a brief explanation for each selection. Also, create a filtered SearchResult with only the selected models. Return just a list of repositories IDs, you will later be asked to output it as a JSON array with the json tool.")
|
||||
|
||||
// Get a response
|
||||
newFragment, err := llm.Ask(ctx, fragment)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fmt.Println(newFragment.LastMessage().Content)
|
||||
repositories := struct {
|
||||
Repositories []string `json:"repositories"`
|
||||
}{}
|
||||
|
||||
s := structures.Structure{
|
||||
Schema: jsonschema.Definition{
|
||||
Type: jsonschema.Object,
|
||||
AdditionalProperties: false,
|
||||
Properties: map[string]jsonschema.Definition{
|
||||
"repositories": {
|
||||
Type: jsonschema.Array,
|
||||
Items: &jsonschema.Definition{Type: jsonschema.String},
|
||||
Description: "The trending repositories IDs",
|
||||
},
|
||||
},
|
||||
Required: []string{"repositories"},
|
||||
},
|
||||
Object: &repositories,
|
||||
}
|
||||
|
||||
err = newFragment.ExtractStructure(ctx, llm, s)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
filteredModels := []ProcessedModel{}
|
||||
for _, m := range searchResult.Models {
|
||||
if slices.Contains(repositories.Repositories, m.ModelID) {
|
||||
filteredModels = append(filteredModels, m)
|
||||
}
|
||||
}
|
||||
|
||||
return filteredModels, nil
|
||||
}
|
||||
|
||||
// ModelMetadata represents extracted metadata from a model
|
||||
type ModelMetadata struct {
|
||||
Tags []string `json:"tags"`
|
||||
License string `json:"license"`
|
||||
}
|
||||
|
||||
// extractModelMetadata extracts tags and license from model README and documentation
|
||||
func extractModelMetadata(ctx context.Context, model ProcessedModel) ([]string, string, error) {
|
||||
// Create a conversation fragment
|
||||
fragment := cogito.NewEmptyFragment().
|
||||
AddMessage("user",
|
||||
`Your task is to extract metadata from an AI model's README and documentation. You will be provided with:
|
||||
1. Model information (ID, author, description)
|
||||
2. README content
|
||||
|
||||
You need to extract:
|
||||
1. **Tags**: An array of relevant tags that describe the model. Use common tags from the gallery such as:
|
||||
- llm, gguf, gpu, cpu, multimodal, image-to-text, text-to-text, text-to-speech, tts
|
||||
- thinking, reasoning, chat, instruction-tuned, code, vision
|
||||
- Model family names (e.g., llama, qwen, mistral, gemma) if applicable
|
||||
- Any other relevant descriptive tags
|
||||
Select 3-8 most relevant tags.
|
||||
|
||||
2. **License**: The license identifier (e.g., "apache-2.0", "mit", "llama2", "gpl-3.0", "bsd", "cc-by-4.0").
|
||||
If no license is found, return an empty string.
|
||||
|
||||
Return the extracted metadata in a structured format.`)
|
||||
|
||||
// Add model information
|
||||
modelInfo := "Model Information:\n"
|
||||
modelInfo += fmt.Sprintf(" ID: %s\n", model.ModelID)
|
||||
modelInfo += fmt.Sprintf(" Author: %s\n", model.Author)
|
||||
modelInfo += fmt.Sprintf(" Downloads: %d\n", model.Downloads)
|
||||
if model.ReadmeContent != "" {
|
||||
modelInfo += fmt.Sprintf(" README Content:\n%s\n", model.ReadmeContent)
|
||||
} else if model.ReadmeContentPreview != "" {
|
||||
modelInfo += fmt.Sprintf(" README Preview: %s\n", model.ReadmeContentPreview)
|
||||
}
|
||||
|
||||
fragment = fragment.AddMessage("user", modelInfo)
|
||||
fragment = fragment.AddMessage("user", "Extract the tags and license from the model information. Return the metadata as a JSON object with 'tags' (array of strings) and 'license' (string).")
|
||||
|
||||
// Get a response
|
||||
newFragment, err := llm.Ask(ctx, fragment)
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
// Extract structured metadata
|
||||
metadata := ModelMetadata{}
|
||||
|
||||
s := structures.Structure{
|
||||
Schema: jsonschema.Definition{
|
||||
Type: jsonschema.Object,
|
||||
AdditionalProperties: false,
|
||||
Properties: map[string]jsonschema.Definition{
|
||||
"tags": {
|
||||
Type: jsonschema.Array,
|
||||
Items: &jsonschema.Definition{Type: jsonschema.String},
|
||||
Description: "Array of relevant tags describing the model",
|
||||
},
|
||||
"license": {
|
||||
Type: jsonschema.String,
|
||||
Description: "License identifier (e.g., apache-2.0, mit, llama2). Empty string if not found.",
|
||||
},
|
||||
},
|
||||
Required: []string{"tags", "license"},
|
||||
},
|
||||
Object: &metadata,
|
||||
}
|
||||
|
||||
err = newFragment.ExtractStructure(ctx, llm, s)
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
return metadata.Tags, metadata.License, nil
|
||||
}
|
||||
|
||||
// extractIconFromReadme scans the README content for image URLs and returns the first suitable icon URL found
|
||||
func extractIconFromReadme(readmeContent string) string {
|
||||
if readmeContent == "" {
|
||||
return ""
|
||||
}
|
||||
|
||||
// Regular expressions to match image URLs in various formats (case-insensitive)
|
||||
// Match markdown image syntax:  - case insensitive extensions
|
||||
markdownImageRegex := regexp.MustCompile(`(?i)!\[[^\]]*\]\(([^)]+\.(png|jpg|jpeg|svg|webp|gif))\)`)
|
||||
// Match HTML img tags: <img src="url">
|
||||
htmlImageRegex := regexp.MustCompile(`(?i)<img[^>]+src=["']([^"']+\.(png|jpg|jpeg|svg|webp|gif))["']`)
|
||||
// Match plain URLs ending with image extensions
|
||||
plainImageRegex := regexp.MustCompile(`(?i)https?://[^\s<>"']+\.(png|jpg|jpeg|svg|webp|gif)`)
|
||||
|
||||
// Try markdown format first
|
||||
matches := markdownImageRegex.FindStringSubmatch(readmeContent)
|
||||
if len(matches) > 1 && matches[1] != "" {
|
||||
url := strings.TrimSpace(matches[1])
|
||||
// Prefer HuggingFace CDN URLs or absolute URLs
|
||||
if strings.HasPrefix(strings.ToLower(url), "http") {
|
||||
return url
|
||||
}
|
||||
}
|
||||
|
||||
// Try HTML img tags
|
||||
matches = htmlImageRegex.FindStringSubmatch(readmeContent)
|
||||
if len(matches) > 1 && matches[1] != "" {
|
||||
url := strings.TrimSpace(matches[1])
|
||||
if strings.HasPrefix(strings.ToLower(url), "http") {
|
||||
return url
|
||||
}
|
||||
}
|
||||
|
||||
// Try plain URLs
|
||||
matches = plainImageRegex.FindStringSubmatch(readmeContent)
|
||||
if len(matches) > 0 {
|
||||
url := strings.TrimSpace(matches[0])
|
||||
if strings.HasPrefix(strings.ToLower(url), "http") {
|
||||
return url
|
||||
}
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// getHuggingFaceAvatarURL attempts to get the HuggingFace avatar URL for a user
|
||||
func getHuggingFaceAvatarURL(author string) string {
|
||||
if author == "" {
|
||||
return ""
|
||||
}
|
||||
|
||||
// Try to fetch user info from HuggingFace API
|
||||
// HuggingFace API endpoint: https://huggingface.co/api/users/{username}
|
||||
baseURL := "https://huggingface.co"
|
||||
userURL := fmt.Sprintf("%s/api/users/%s", baseURL, author)
|
||||
|
||||
req, err := http.NewRequest("GET", userURL, nil)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
client := &http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return ""
|
||||
}
|
||||
|
||||
// Parse the response to get avatar URL
|
||||
var userInfo map[string]any
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
if err := json.Unmarshal(body, &userInfo); err != nil {
|
||||
return ""
|
||||
}
|
||||
|
||||
// Try to extract avatar URL from response
|
||||
if avatar, ok := userInfo["avatarUrl"].(string); ok && avatar != "" {
|
||||
return avatar
|
||||
}
|
||||
if avatar, ok := userInfo["avatar"].(string); ok && avatar != "" {
|
||||
return avatar
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// extractModelIcon extracts icon URL from README or falls back to HuggingFace avatar
|
||||
func extractModelIcon(model ProcessedModel) string {
|
||||
// First, try to extract icon from README
|
||||
if icon := extractIconFromReadme(model.ReadmeContent); icon != "" {
|
||||
return icon
|
||||
}
|
||||
|
||||
// Fallback: Try to get HuggingFace user avatar
|
||||
if model.Author != "" {
|
||||
if avatar := getHuggingFaceAvatarURL(model.Author); avatar != "" {
|
||||
return avatar
|
||||
}
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
2
.github/gallery-agent/gallery.go
vendored
2
.github/gallery-agent/gallery.go
vendored
|
|
@ -7,8 +7,8 @@ import (
|
|||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
"github.com/mudler/LocalAI/core/gallery/importers"
|
||||
"sigs.k8s.io/yaml"
|
||||
)
|
||||
|
||||
func formatTextContent(text string) string {
|
||||
|
|
|
|||
301
.github/gallery-agent/helpers.go
vendored
Normal file
301
.github/gallery-agent/helpers.go
vendored
Normal file
|
|
@ -0,0 +1,301 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
hfapi "github.com/mudler/LocalAI/pkg/huggingface-api"
|
||||
"sigs.k8s.io/yaml"
|
||||
)
|
||||
|
||||
var galleryIndexPath = os.Getenv("GALLERY_INDEX_PATH")
|
||||
|
||||
// getGalleryIndexPath returns the gallery index file path, with a default fallback
|
||||
func getGalleryIndexPath() string {
|
||||
if galleryIndexPath != "" {
|
||||
return galleryIndexPath
|
||||
}
|
||||
return "gallery/index.yaml"
|
||||
}
|
||||
|
||||
type galleryModel struct {
|
||||
Name string `yaml:"name"`
|
||||
Urls []string `yaml:"urls"`
|
||||
}
|
||||
|
||||
// loadGalleryURLSet parses gallery/index.yaml once and returns the set of
|
||||
// HuggingFace model URLs already present in the gallery.
|
||||
func loadGalleryURLSet() (map[string]struct{}, error) {
|
||||
indexPath := getGalleryIndexPath()
|
||||
content, err := os.ReadFile(indexPath)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read %s: %w", indexPath, err)
|
||||
}
|
||||
|
||||
var galleryModels []galleryModel
|
||||
if err := yaml.Unmarshal(content, &galleryModels); err != nil {
|
||||
return nil, fmt.Errorf("failed to unmarshal %s: %w", indexPath, err)
|
||||
}
|
||||
|
||||
set := make(map[string]struct{}, len(galleryModels))
|
||||
for _, gm := range galleryModels {
|
||||
for _, u := range gm.Urls {
|
||||
set[u] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
// Also skip URLs already proposed in open (unmerged) gallery-agent PRs.
|
||||
// The workflow injects these via EXTRA_SKIP_URLS so we don't keep
|
||||
// re-proposing the same model every run while a PR is waiting to merge.
|
||||
for _, line := range strings.FieldsFunc(os.Getenv("EXTRA_SKIP_URLS"), func(r rune) bool {
|
||||
return r == '\n' || r == ',' || r == ' '
|
||||
}) {
|
||||
u := strings.TrimSpace(line)
|
||||
if u != "" {
|
||||
set[u] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
return set, nil
|
||||
}
|
||||
|
||||
// modelAlreadyInGallery checks whether a HuggingFace model repo is already
|
||||
// referenced in the gallery URL set.
|
||||
func modelAlreadyInGallery(set map[string]struct{}, modelID string) bool {
|
||||
_, ok := set["https://huggingface.co/"+modelID]
|
||||
return ok
|
||||
}
|
||||
|
||||
// baseModelFromTags returns the first `base_model:<repo>` value found in the
|
||||
// tag list, or "" if none is present. HuggingFace surfaces the base model
|
||||
// declared in the model card's YAML frontmatter as such a tag.
|
||||
func baseModelFromTags(tags []string) string {
|
||||
for _, t := range tags {
|
||||
if strings.HasPrefix(t, "base_model:") {
|
||||
return strings.TrimPrefix(t, "base_model:")
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// licenseFromTags returns the `license:<id>` value from the tag list, or "".
|
||||
func licenseFromTags(tags []string) string {
|
||||
for _, t := range tags {
|
||||
if strings.HasPrefix(t, "license:") {
|
||||
return strings.TrimPrefix(t, "license:")
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// curatedTags produces the gallery tag list from HuggingFace's raw tag set.
|
||||
// Always includes llm + gguf, then adds whitelisted family / capability
|
||||
// markers when they appear in the HF tag list.
|
||||
func curatedTags(hfTags []string) []string {
|
||||
whitelist := []string{
|
||||
"gpu", "cpu",
|
||||
"llama", "mistral", "mixtral", "qwen", "qwen2", "qwen3",
|
||||
"gemma", "gemma2", "gemma3", "phi", "phi3", "phi4",
|
||||
"deepseek", "yi", "falcon", "command-r",
|
||||
"vision", "multimodal", "code", "chat",
|
||||
"instruction-tuned", "reasoning", "thinking",
|
||||
}
|
||||
seen := map[string]struct{}{}
|
||||
out := []string{"llm", "gguf"}
|
||||
seen["llm"] = struct{}{}
|
||||
seen["gguf"] = struct{}{}
|
||||
|
||||
hfSet := map[string]struct{}{}
|
||||
for _, t := range hfTags {
|
||||
hfSet[strings.ToLower(t)] = struct{}{}
|
||||
}
|
||||
for _, w := range whitelist {
|
||||
if _, ok := hfSet[w]; ok {
|
||||
if _, dup := seen[w]; !dup {
|
||||
out = append(out, w)
|
||||
seen[w] = struct{}{}
|
||||
}
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// resolveReadme fetches a description-quality README for a (possibly
|
||||
// quantized) repo: if a `base_model:` tag is present, fetch the base repo's
|
||||
// README; otherwise fall back to the repo's own README.
|
||||
func resolveReadme(client *hfapi.Client, modelID string, hfTags []string) (string, error) {
|
||||
if base := baseModelFromTags(hfTags); base != "" && base != modelID {
|
||||
if content, err := client.GetReadmeContent(base, "README.md"); err == nil && strings.TrimSpace(content) != "" {
|
||||
return cleanTextContent(content), nil
|
||||
}
|
||||
}
|
||||
content, err := client.GetReadmeContent(modelID, "README.md")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return cleanTextContent(content), nil
|
||||
}
|
||||
|
||||
// extractDescription turns a raw HuggingFace README into a concise plain-text
|
||||
// description suitable for embedding in gallery/index.yaml: strips YAML
|
||||
// frontmatter, HTML tags/comments, markdown images, link URLs (keeping the
|
||||
// link text), markdown tables, and then truncates at a paragraph boundary
|
||||
// around ~1200 characters. Raw README should still be used for icon
|
||||
// extraction — call this only for the `description:` field.
|
||||
func extractDescription(readme string) string {
|
||||
s := readme
|
||||
|
||||
// Strip leading YAML frontmatter: `---\n...\n---\n` at start of file.
|
||||
if strings.HasPrefix(strings.TrimLeft(s, " \t\n"), "---") {
|
||||
trimmed := strings.TrimLeft(s, " \t\n")
|
||||
rest := strings.TrimPrefix(trimmed, "---")
|
||||
if idx := strings.Index(rest, "\n---"); idx >= 0 {
|
||||
after := rest[idx+len("\n---"):]
|
||||
after = strings.TrimPrefix(after, "\n")
|
||||
s = after
|
||||
}
|
||||
}
|
||||
|
||||
// Strip HTML comments and tags.
|
||||
s = regexp.MustCompile(`(?s)<!--.*?-->`).ReplaceAllString(s, "")
|
||||
s = regexp.MustCompile(`(?is)<[^>]+>`).ReplaceAllString(s, "")
|
||||
|
||||
// Strip markdown images entirely.
|
||||
s = regexp.MustCompile(`!\[[^\]]*\]\([^)]*\)`).ReplaceAllString(s, "")
|
||||
// Replace markdown links `[text](url)` with just `text`.
|
||||
s = regexp.MustCompile(`\[([^\]]+)\]\([^)]+\)`).ReplaceAllString(s, "$1")
|
||||
|
||||
// Drop table lines and horizontal rules, and flatten all leading
|
||||
// whitespace: generateYAMLEntry embeds this under a `description: |`
|
||||
// literal block whose indentation is set by the first non-empty line.
|
||||
// If any line has extra leading whitespace (e.g. from an indented
|
||||
// `<p align="center">` block in the original README), YAML will pick
|
||||
// that up as the block's indent and every later line at a smaller
|
||||
// indent blows the block scalar. Stripping leading whitespace here
|
||||
// guarantees uniform 4-space indentation after formatTextContent runs.
|
||||
var kept []string
|
||||
for _, line := range strings.Split(s, "\n") {
|
||||
t := strings.TrimLeft(line, " \t")
|
||||
ts := strings.TrimSpace(t)
|
||||
if strings.HasPrefix(ts, "|") {
|
||||
continue
|
||||
}
|
||||
if strings.HasPrefix(ts, ":--") || strings.HasPrefix(ts, "---") || strings.HasPrefix(ts, "===") {
|
||||
continue
|
||||
}
|
||||
kept = append(kept, t)
|
||||
}
|
||||
s = strings.Join(kept, "\n")
|
||||
|
||||
// Normalise whitespace and drop any leading blank lines so the literal
|
||||
// block in YAML doesn't start with a blank first line (which would
|
||||
// break the indentation detector the same way).
|
||||
s = cleanTextContent(s)
|
||||
s = strings.TrimLeft(s, " \t\n")
|
||||
|
||||
// Truncate at a paragraph boundary around maxLen chars.
|
||||
const maxLen = 1200
|
||||
if len(s) > maxLen {
|
||||
cut := strings.LastIndex(s[:maxLen], "\n\n")
|
||||
if cut < maxLen/3 {
|
||||
cut = maxLen
|
||||
}
|
||||
s = strings.TrimRight(s[:cut], " \t\n") + "\n\n..."
|
||||
}
|
||||
|
||||
return s
|
||||
}
|
||||
|
||||
// cleanTextContent removes trailing spaces/tabs and collapses multiple empty
|
||||
// lines so README content embeds cleanly into YAML without lint noise.
|
||||
func cleanTextContent(text string) string {
|
||||
lines := strings.Split(text, "\n")
|
||||
var cleaned []string
|
||||
var prevEmpty bool
|
||||
for _, line := range lines {
|
||||
trimmed := strings.TrimRight(line, " \t\r")
|
||||
if trimmed == "" {
|
||||
if !prevEmpty {
|
||||
cleaned = append(cleaned, "")
|
||||
}
|
||||
prevEmpty = true
|
||||
} else {
|
||||
cleaned = append(cleaned, trimmed)
|
||||
prevEmpty = false
|
||||
}
|
||||
}
|
||||
return strings.TrimRight(strings.Join(cleaned, "\n"), "\n")
|
||||
}
|
||||
|
||||
// extractIconFromReadme scans README content for an image URL usable as a
|
||||
// gallery entry icon.
|
||||
func extractIconFromReadme(readmeContent string) string {
|
||||
if readmeContent == "" {
|
||||
return ""
|
||||
}
|
||||
|
||||
markdownImageRegex := regexp.MustCompile(`(?i)!\[[^\]]*\]\(([^)]+\.(png|jpg|jpeg|svg|webp|gif))\)`)
|
||||
htmlImageRegex := regexp.MustCompile(`(?i)<img[^>]+src=["']([^"']+\.(png|jpg|jpeg|svg|webp|gif))["']`)
|
||||
plainImageRegex := regexp.MustCompile(`(?i)https?://[^\s<>"']+\.(png|jpg|jpeg|svg|webp|gif)`)
|
||||
|
||||
if m := markdownImageRegex.FindStringSubmatch(readmeContent); len(m) > 1 && strings.HasPrefix(strings.ToLower(m[1]), "http") {
|
||||
return strings.TrimSpace(m[1])
|
||||
}
|
||||
if m := htmlImageRegex.FindStringSubmatch(readmeContent); len(m) > 1 && strings.HasPrefix(strings.ToLower(m[1]), "http") {
|
||||
return strings.TrimSpace(m[1])
|
||||
}
|
||||
if m := plainImageRegex.FindStringSubmatch(readmeContent); len(m) > 0 && strings.HasPrefix(strings.ToLower(m[0]), "http") {
|
||||
return strings.TrimSpace(m[0])
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// getHuggingFaceAvatarURL returns the HF avatar URL for a user, or "".
|
||||
func getHuggingFaceAvatarURL(author string) string {
|
||||
if author == "" {
|
||||
return ""
|
||||
}
|
||||
userURL := fmt.Sprintf("https://huggingface.co/api/users/%s/overview", author)
|
||||
resp, err := http.Get(userURL)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return ""
|
||||
}
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
var info map[string]any
|
||||
if err := json.Unmarshal(body, &info); err != nil {
|
||||
return ""
|
||||
}
|
||||
if v, ok := info["avatarUrl"].(string); ok && v != "" {
|
||||
return v
|
||||
}
|
||||
if v, ok := info["avatar"].(string); ok && v != "" {
|
||||
return v
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// extractModelIcon extracts an icon URL from the README, falling back to the
|
||||
// HuggingFace user avatar.
|
||||
func extractModelIcon(model ProcessedModel) string {
|
||||
if icon := extractIconFromReadme(model.ReadmeContent); icon != "" {
|
||||
return icon
|
||||
}
|
||||
if model.Author != "" {
|
||||
if avatar := getHuggingFaceAvatarURL(model.Author); avatar != "" {
|
||||
return avatar
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
409
.github/gallery-agent/main.go
vendored
409
.github/gallery-agent/main.go
vendored
|
|
@ -6,7 +6,6 @@ import (
|
|||
"fmt"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
hfapi "github.com/mudler/LocalAI/pkg/huggingface-api"
|
||||
|
|
@ -39,16 +38,6 @@ type ProcessedModel struct {
|
|||
Icon string `json:"icon,omitempty"`
|
||||
}
|
||||
|
||||
// SearchResult represents the complete result of searching and processing models
|
||||
type SearchResult struct {
|
||||
SearchTerm string `json:"search_term"`
|
||||
Limit int `json:"limit"`
|
||||
Quantization string `json:"quantization"`
|
||||
TotalModelsFound int `json:"total_models_found"`
|
||||
Models []ProcessedModel `json:"models"`
|
||||
FormattedOutput string `json:"formatted_output"`
|
||||
}
|
||||
|
||||
// AddedModelSummary represents a summary of models added to the gallery
|
||||
type AddedModelSummary struct {
|
||||
SearchTerm string `json:"search_term"`
|
||||
|
|
@ -63,19 +52,16 @@ type AddedModelSummary struct {
|
|||
func main() {
|
||||
startTime := time.Now()
|
||||
|
||||
// Check for synthetic mode
|
||||
syntheticMode := os.Getenv("SYNTHETIC_MODE")
|
||||
if syntheticMode == "true" || syntheticMode == "1" {
|
||||
// Synthetic mode for local testing
|
||||
if sm := os.Getenv("SYNTHETIC_MODE"); sm == "true" || sm == "1" {
|
||||
fmt.Println("Running in SYNTHETIC MODE - generating random test data")
|
||||
err := runSyntheticMode()
|
||||
if err != nil {
|
||||
if err := runSyntheticMode(); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error in synthetic mode: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Get configuration from environment variables
|
||||
searchTerm := os.Getenv("SEARCH_TERM")
|
||||
if searchTerm == "" {
|
||||
searchTerm = "GGUF"
|
||||
|
|
@ -83,7 +69,7 @@ func main() {
|
|||
|
||||
limitStr := os.Getenv("LIMIT")
|
||||
if limitStr == "" {
|
||||
limitStr = "5"
|
||||
limitStr = "15"
|
||||
}
|
||||
limit, err := strconv.Atoi(limitStr)
|
||||
if err != nil {
|
||||
|
|
@ -92,287 +78,197 @@ func main() {
|
|||
}
|
||||
|
||||
quantization := os.Getenv("QUANTIZATION")
|
||||
|
||||
maxModels := os.Getenv("MAX_MODELS")
|
||||
if maxModels == "" {
|
||||
maxModels = "1"
|
||||
if quantization == "" {
|
||||
quantization = "Q4_K_M"
|
||||
}
|
||||
maxModelsInt, err := strconv.Atoi(maxModels)
|
||||
|
||||
maxModelsStr := os.Getenv("MAX_MODELS")
|
||||
if maxModelsStr == "" {
|
||||
maxModelsStr = "1"
|
||||
}
|
||||
maxModels, err := strconv.Atoi(maxModelsStr)
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error parsing MAX_MODELS: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
// Print configuration
|
||||
fmt.Printf("Gallery Agent Configuration:\n")
|
||||
fmt.Printf(" Search Term: %s\n", searchTerm)
|
||||
fmt.Printf(" Limit: %d\n", limit)
|
||||
fmt.Printf(" Quantization: %s\n", quantization)
|
||||
fmt.Printf(" Max Models to Add: %d\n", maxModelsInt)
|
||||
fmt.Printf(" Gallery Index Path: %s\n", os.Getenv("GALLERY_INDEX_PATH"))
|
||||
fmt.Printf(" Max Models to Add: %d\n", maxModels)
|
||||
fmt.Printf(" Gallery Index Path: %s\n", getGalleryIndexPath())
|
||||
fmt.Println()
|
||||
|
||||
result, err := searchAndProcessModels(searchTerm, limit, quantization)
|
||||
// Phase 1: load current gallery and query HuggingFace.
|
||||
gallerySet, err := loadGalleryURLSet()
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
|
||||
fmt.Fprintf(os.Stderr, "Error loading gallery index: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
fmt.Printf("Loaded %d existing gallery entries\n", len(gallerySet))
|
||||
|
||||
fmt.Println(result.FormattedOutput)
|
||||
var models []ProcessedModel
|
||||
|
||||
if len(result.Models) > 1 {
|
||||
fmt.Println("More than one model found (", len(result.Models), "), using AI agent to select the most interesting models")
|
||||
for _, model := range result.Models {
|
||||
fmt.Println("Model: ", model.ModelID)
|
||||
}
|
||||
// Use AI agent to select the most interesting models
|
||||
fmt.Println("Using AI agent to select the most interesting models...")
|
||||
models, err = selectMostInterestingModels(context.Background(), result)
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error in model selection: %v\n", err)
|
||||
// Continue with original result if selection fails
|
||||
models = result.Models
|
||||
}
|
||||
} else if len(result.Models) == 1 {
|
||||
models = result.Models
|
||||
fmt.Println("Only one model found, using it directly")
|
||||
}
|
||||
|
||||
fmt.Print(models)
|
||||
|
||||
// Filter out models that already exist in the gallery
|
||||
fmt.Println("Filtering out existing models...")
|
||||
models, err = filterExistingModels(models)
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error filtering existing models: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
// Limit to maxModelsInt after filtering
|
||||
if len(models) > maxModelsInt {
|
||||
models = models[:maxModelsInt]
|
||||
}
|
||||
|
||||
// Track added models for summary
|
||||
var addedModelIDs []string
|
||||
var addedModelURLs []string
|
||||
|
||||
// Generate YAML entries and append to gallery/index.yaml
|
||||
if len(models) > 0 {
|
||||
for _, model := range models {
|
||||
addedModelIDs = append(addedModelIDs, model.ModelID)
|
||||
// Generate Hugging Face URL for the model
|
||||
modelURL := fmt.Sprintf("https://huggingface.co/%s", model.ModelID)
|
||||
addedModelURLs = append(addedModelURLs, modelURL)
|
||||
}
|
||||
fmt.Println("Generating YAML entries for selected models...")
|
||||
err = generateYAMLForModels(context.Background(), models, quantization)
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error generating YAML entries: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
} else {
|
||||
fmt.Println("No new models to add to the gallery.")
|
||||
}
|
||||
|
||||
// Create and write summary
|
||||
processingTime := time.Since(startTime).String()
|
||||
summary := AddedModelSummary{
|
||||
SearchTerm: searchTerm,
|
||||
TotalFound: result.TotalModelsFound,
|
||||
ModelsAdded: len(addedModelIDs),
|
||||
AddedModelIDs: addedModelIDs,
|
||||
AddedModelURLs: addedModelURLs,
|
||||
Quantization: quantization,
|
||||
ProcessingTime: processingTime,
|
||||
}
|
||||
|
||||
// Write summary to file
|
||||
summaryData, err := json.MarshalIndent(summary, "", " ")
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error marshaling summary: %v\n", err)
|
||||
} else {
|
||||
err = os.WriteFile("gallery-agent-summary.json", summaryData, 0644)
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error writing summary file: %v\n", err)
|
||||
} else {
|
||||
fmt.Printf("Summary written to gallery-agent-summary.json\n")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func searchAndProcessModels(searchTerm string, limit int, quantization string) (*SearchResult, error) {
|
||||
client := hfapi.NewClient()
|
||||
var outputBuilder strings.Builder
|
||||
|
||||
fmt.Println("Searching for models...")
|
||||
// Initialize the result struct
|
||||
result := &SearchResult{
|
||||
SearchTerm: searchTerm,
|
||||
Limit: limit,
|
||||
Quantization: quantization,
|
||||
Models: []ProcessedModel{},
|
||||
}
|
||||
|
||||
models, err := client.GetLatest(searchTerm, limit)
|
||||
fmt.Println("Searching for trending models on HuggingFace...")
|
||||
rawModels, err := client.GetTrending(searchTerm, limit)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to fetch models: %w", err)
|
||||
fmt.Fprintf(os.Stderr, "Error fetching models: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
fmt.Printf("Found %d trending models matching %q\n", len(rawModels), searchTerm)
|
||||
totalFound := len(rawModels)
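`GetTrending` comes from `pkg/huggingface-api` and is not part of this diff. Purely as an illustration of the kind of call it wraps; the endpoint, query parameters, and field names below are assumptions, not the package's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// TrendingModel mirrors only the fields this sketch needs from the public
// HuggingFace listing API (assumed shape, not the real hfapi types).
type TrendingModel struct {
	ModelID   string `json:"modelId"`
	Downloads int    `json:"downloads"`
}

// getTrending queries the public model listing, sorted by trending score.
func getTrending(search string, limit int) ([]TrendingModel, error) {
	q := url.Values{}
	q.Set("search", search)
	q.Set("sort", "trendingScore")
	q.Set("direction", "-1")
	q.Set("limit", fmt.Sprintf("%d", limit))

	resp, err := http.Get("https://huggingface.co/api/models?" + q.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var models []TrendingModel
	if err := json.NewDecoder(resp.Body).Decode(&models); err != nil {
		return nil, err
	}
	return models, nil
}
```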
|
||||
|
||||
// Phase 2: drop anything already in the gallery *before* any expensive
|
||||
// per-model work (GetModelDetails, README fetches, icon lookups).
|
||||
fresh := rawModels[:0]
|
||||
for _, m := range rawModels {
|
||||
if modelAlreadyInGallery(gallerySet, m.ModelID) {
|
||||
fmt.Printf("Skipping existing model: %s\n", m.ModelID)
|
||||
continue
|
||||
}
|
||||
fresh = append(fresh, m)
|
||||
}
|
||||
fmt.Printf("%d candidates after gallery dedup\n", len(fresh))
|
||||
|
||||
// Phase 3: HuggingFace already returned these in trendingScore order —
|
||||
// just cap to MAX_MODELS.
|
||||
if len(fresh) > maxModels {
|
||||
fresh = fresh[:maxModels]
|
||||
}
|
||||
if len(fresh) == 0 {
|
||||
fmt.Println("No new models to add to the gallery.")
|
||||
writeSummary(AddedModelSummary{
|
||||
SearchTerm: searchTerm,
|
||||
TotalFound: totalFound,
|
||||
ModelsAdded: 0,
|
||||
Quantization: quantization,
|
||||
ProcessingTime: time.Since(startTime).String(),
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Println("Models found:", len(models))
|
||||
result.TotalModelsFound = len(models)
|
||||
// Phase 4: fetch details and build ProcessedModel entries for survivors.
|
||||
var processed []ProcessedModel
|
||||
quantPrefs := []string{quantization, "Q4_K_M", "Q4_K_S", "Q3_K_M", "Q2_K", "Q8_0"}
|
||||
for _, m := range fresh {
|
||||
fmt.Printf("Processing model: %s (downloads=%d)\n", m.ModelID, m.Downloads)
|
||||
|
||||
if len(models) == 0 {
|
||||
outputBuilder.WriteString("No models found.\n")
|
||||
result.FormattedOutput = outputBuilder.String()
|
||||
return result, nil
|
||||
}
|
||||
|
||||
outputBuilder.WriteString(fmt.Sprintf("Found %d models matching '%s':\n\n", len(models), searchTerm))
|
||||
|
||||
// Process each model
|
||||
for i, model := range models {
|
||||
outputBuilder.WriteString(fmt.Sprintf("%d. Processing Model: %s\n", i+1, model.ModelID))
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Author: %s\n", model.Author))
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Downloads: %d\n", model.Downloads))
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Last Modified: %s\n", model.LastModified))
|
||||
|
||||
// Initialize processed model struct
|
||||
processedModel := ProcessedModel{
|
||||
ModelID: model.ModelID,
|
||||
Author: model.Author,
|
||||
Downloads: model.Downloads,
|
||||
LastModified: model.LastModified,
|
||||
QuantizationPreferences: []string{quantization, "Q4_K_M", "Q4_K_S", "Q3_K_M", "Q2_K"},
|
||||
pm := ProcessedModel{
|
||||
ModelID: m.ModelID,
|
||||
Author: m.Author,
|
||||
Downloads: m.Downloads,
|
||||
LastModified: m.LastModified,
|
||||
QuantizationPreferences: quantPrefs,
|
||||
}
|
||||
|
||||
// Get detailed model information
|
||||
details, err := client.GetModelDetails(model.ModelID)
|
||||
details, err := client.GetModelDetails(m.ModelID)
|
||||
if err != nil {
|
||||
errorMsg := fmt.Sprintf(" Error getting model details: %v\n", err)
|
||||
outputBuilder.WriteString(errorMsg)
|
||||
processedModel.ProcessingError = err.Error()
|
||||
result.Models = append(result.Models, processedModel)
|
||||
fmt.Printf(" Error getting model details: %v (skipping)\n", err)
|
||||
continue
|
||||
}
|
||||
|
||||
// Define quantization preferences (in order of preference)
|
||||
quantizationPreferences := []string{quantization, "Q4_K_M", "Q4_K_S", "Q3_K_M", "Q2_K"}
|
||||
preferred := hfapi.FindPreferredModelFile(details.Files, quantPrefs)
|
||||
if preferred == nil {
|
||||
fmt.Printf(" No GGUF file matching %v — skipping\n", quantPrefs)
|
||||
continue
|
||||
}
|
||||
|
||||
// Find preferred model file
|
||||
preferredModelFile := hfapi.FindPreferredModelFile(details.Files, quantizationPreferences)
|
||||
|
||||
// Process files
|
||||
processedFiles := make([]ProcessedModelFile, len(details.Files))
|
||||
for j, file := range details.Files {
|
||||
pm.Files = make([]ProcessedModelFile, len(details.Files))
|
||||
for j, f := range details.Files {
|
||||
fileType := "other"
|
||||
if file.IsReadme {
|
||||
if f.IsReadme {
|
||||
fileType = "readme"
|
||||
} else if preferredModelFile != nil && file.Path == preferredModelFile.Path {
|
||||
} else if f.Path == preferred.Path {
|
||||
fileType = "model"
|
||||
}
|
||||
|
||||
processedFiles[j] = ProcessedModelFile{
|
||||
Path: file.Path,
|
||||
Size: file.Size,
|
||||
SHA256: file.SHA256,
|
||||
IsReadme: file.IsReadme,
|
||||
pm.Files[j] = ProcessedModelFile{
|
||||
Path: f.Path,
|
||||
Size: f.Size,
|
||||
SHA256: f.SHA256,
|
||||
IsReadme: f.IsReadme,
|
||||
FileType: fileType,
|
||||
}
|
||||
}
|
||||
|
||||
processedModel.Files = processedFiles
|
||||
|
||||
// Set preferred model file
|
||||
if preferredModelFile != nil {
|
||||
for _, file := range processedFiles {
|
||||
if file.Path == preferredModelFile.Path {
|
||||
processedModel.PreferredModelFile = &file
|
||||
break
|
||||
}
|
||||
			if f.Path == preferred.Path {
				copyFile := pm.Files[j]
				pm.PreferredModelFile = &copyFile
			}
			if f.IsReadme {
				copyFile := pm.Files[j]
				pm.ReadmeFile = &copyFile
			}
		}
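The copy into `copyFile` before taking its address is a deliberate detail: the stored pointer ends up detached from the `pm.Files` slice rather than aliasing one of its elements. A small standalone illustration of the difference, using a hypothetical `file` type rather than the agent's own types:

```go
package main

import "fmt"

type file struct{ Path string }

func main() {
	files := []file{{"a.gguf"}, {"README.md"}}

	// Pointer into the slice: it observes later in-place writes to the element.
	alias := &files[0]

	// Copy first, then take the address: the pointer is detached from the
	// slice, which is the pattern used for PreferredModelFile above.
	cp := files[0]
	detached := &cp

	files[0].Path = "b.gguf"
	fmt.Println(alias.Path, detached.Path) // prints: b.gguf a.gguf
}
```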
|
||||
|
||||
// Print file information
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Files found: %d\n", len(details.Files)))
|
||||
// Deterministic README resolution: follow base_model tag if set.
|
||||
// Keep the raw (HTML-bearing) README around while we extract the
|
||||
// icon, then strip it down to a plain-text description for the
|
||||
// `description:` YAML field.
|
||||
readme, err := resolveReadme(client, m.ModelID, m.Tags)
|
||||
if err != nil {
|
||||
fmt.Printf(" Warning: failed to fetch README: %v\n", err)
|
||||
}
|
||||
pm.ReadmeContent = readme
|
||||
|
||||
if preferredModelFile != nil {
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Preferred Model File: %s (SHA256: %s)\n",
|
||||
preferredModelFile.Path,
|
||||
preferredModelFile.SHA256))
|
||||
} else {
|
||||
outputBuilder.WriteString(fmt.Sprintf(" No model file found with quantization preferences: %v\n", quantizationPreferences))
|
||||
pm.License = licenseFromTags(m.Tags)
|
||||
pm.Tags = curatedTags(m.Tags)
|
||||
pm.Icon = extractModelIcon(pm)
|
||||
|
||||
if pm.ReadmeContent != "" {
|
||||
pm.ReadmeContent = extractDescription(pm.ReadmeContent)
|
||||
pm.ReadmeContentPreview = truncateString(pm.ReadmeContent, 200)
|
||||
}
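`extractDescription` is another helper this diff references but does not define; per the comment above, its job is to reduce the raw, possibly HTML-bearing README to a plain-text description. A rough sketch under that assumption, with an illustrative tag-stripping regex and length cap:

```go
package main

import (
	"regexp"
	"strings"
)

var (
	htmlTagRe   = regexp.MustCompile(`<[^>]+>`)
	mdHeadingRe = regexp.MustCompile(`(?m)^#+\s*`)
)

// extractDescription reduces a model card to plain prose: strip HTML tags and
// Markdown headings, collapse whitespace, and keep a bounded amount of text.
func extractDescription(readme string) string {
	s := htmlTagRe.ReplaceAllString(readme, " ")
	s = mdHeadingRe.ReplaceAllString(s, "")
	s = strings.Join(strings.Fields(s), " ")
	const max = 2000
	if len(s) > max {
		s = s[:max]
	}
	return strings.TrimSpace(s)
}
```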
|
||||
|
||||
if details.ReadmeFile != nil {
|
||||
outputBuilder.WriteString(fmt.Sprintf(" README File: %s\n", details.ReadmeFile.Path))
|
||||
|
||||
// Find and set readme file
|
||||
for _, file := range processedFiles {
|
||||
if file.IsReadme {
|
||||
processedModel.ReadmeFile = &file
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println("Getting real readme for", model.ModelID, "waiting...")
|
||||
// Use agent to get the real readme and prepare the model description
|
||||
readmeContent, err := getRealReadme(context.Background(), model.ModelID)
|
||||
if err == nil {
|
||||
processedModel.ReadmeContent = readmeContent
|
||||
processedModel.ReadmeContentPreview = truncateString(readmeContent, 200)
|
||||
outputBuilder.WriteString(fmt.Sprintf(" README Content Preview: %s\n",
|
||||
processedModel.ReadmeContentPreview))
|
||||
} else {
|
||||
fmt.Printf(" Warning: Failed to get real readme: %v\n", err)
|
||||
}
|
||||
fmt.Println("Real readme got", readmeContent)
|
||||
|
||||
// Extract metadata (tags, license) from README using LLM
|
||||
fmt.Println("Extracting metadata for", model.ModelID, "waiting...")
|
||||
tags, license, err := extractModelMetadata(context.Background(), processedModel)
|
||||
if err == nil {
|
||||
processedModel.Tags = tags
|
||||
processedModel.License = license
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Tags: %v\n", tags))
|
||||
outputBuilder.WriteString(fmt.Sprintf(" License: %s\n", license))
|
||||
} else {
|
||||
fmt.Printf(" Warning: Failed to extract metadata: %v\n", err)
|
||||
}
|
||||
|
||||
// Extract icon from README or use HuggingFace avatar
|
||||
icon := extractModelIcon(processedModel)
|
||||
if icon != "" {
|
||||
processedModel.Icon = icon
|
||||
outputBuilder.WriteString(fmt.Sprintf(" Icon: %s\n", icon))
|
||||
}
|
||||
// Get README content
|
||||
// readmeContent, err := client.GetReadmeContent(model.ModelID, details.ReadmeFile.Path)
|
||||
// if err == nil {
|
||||
// processedModel.ReadmeContent = readmeContent
|
||||
// processedModel.ReadmeContentPreview = truncateString(readmeContent, 200)
|
||||
// outputBuilder.WriteString(fmt.Sprintf(" README Content Preview: %s\n",
|
||||
// processedModel.ReadmeContentPreview))
|
||||
// }
|
||||
}
|
||||
|
||||
// Print all files with their checksums
|
||||
outputBuilder.WriteString(" All Files:\n")
|
||||
for _, file := range processedFiles {
|
||||
outputBuilder.WriteString(fmt.Sprintf(" - %s (%s, %d bytes", file.Path, file.FileType, file.Size))
|
||||
if file.SHA256 != "" {
|
||||
outputBuilder.WriteString(fmt.Sprintf(", SHA256: %s", file.SHA256))
|
||||
}
|
||||
outputBuilder.WriteString(")\n")
|
||||
}
|
||||
|
||||
outputBuilder.WriteString("\n")
|
||||
result.Models = append(result.Models, processedModel)
|
||||
fmt.Printf(" License: %s, Tags: %v, Icon: %s\n", pm.License, pm.Tags, pm.Icon)
|
||||
processed = append(processed, pm)
|
||||
}
|
||||
|
||||
result.FormattedOutput = outputBuilder.String()
|
||||
return result, nil
|
||||
if len(processed) == 0 {
|
||||
fmt.Println("No processable models after detail fetch.")
|
||||
writeSummary(AddedModelSummary{
|
||||
SearchTerm: searchTerm,
|
||||
TotalFound: totalFound,
|
||||
ModelsAdded: 0,
|
||||
Quantization: quantization,
|
||||
ProcessingTime: time.Since(startTime).String(),
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
// Phase 5: write YAML entries.
|
||||
var addedIDs, addedURLs []string
|
||||
for _, pm := range processed {
|
||||
addedIDs = append(addedIDs, pm.ModelID)
|
||||
addedURLs = append(addedURLs, "https://huggingface.co/"+pm.ModelID)
|
||||
}
|
||||
|
||||
fmt.Println("Generating YAML entries for selected models...")
|
||||
if err := generateYAMLForModels(context.Background(), processed, quantization); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error generating YAML entries: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
writeSummary(AddedModelSummary{
|
||||
SearchTerm: searchTerm,
|
||||
TotalFound: totalFound,
|
||||
ModelsAdded: len(addedIDs),
|
||||
AddedModelIDs: addedIDs,
|
||||
AddedModelURLs: addedURLs,
|
||||
Quantization: quantization,
|
||||
ProcessingTime: time.Since(startTime).String(),
|
||||
})
|
||||
}
|
||||
|
||||
func writeSummary(summary AddedModelSummary) {
|
||||
data, err := json.MarshalIndent(summary, "", " ")
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error marshaling summary: %v\n", err)
|
||||
return
|
||||
}
|
||||
if err := os.WriteFile("gallery-agent-summary.json", data, 0644); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error writing summary file: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println("Summary written to gallery-agent-summary.json")
|
||||
}
|
||||
|
||||
@@ -381,3 +277,4 @@ func truncateString(s string, maxLen int) string {
	}
	return s[:maxLen] + "..."
}
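The phased `main()` above leans on a few small helpers that are referenced but not shown in this diff (`getGalleryIndexPath`, `loadGalleryURLSet`, `modelAlreadyInGallery`, plus the `EXTRA_SKIP_URLS` input wired up in the workflow below). A minimal sketch of how they could fit together, assuming the gallery index is the YAML file pointed to by `GALLERY_INDEX_PATH` and that deduplication works on `https://huggingface.co/<model-id>` URLs; the names match the calls above, but the bodies are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// getGalleryIndexPath returns the gallery index location, defaulting to the
// in-repo path when GALLERY_INDEX_PATH is not set.
func getGalleryIndexPath() string {
	if p := os.Getenv("GALLERY_INDEX_PATH"); p != "" {
		return p
	}
	return "gallery/index.yaml"
}

var hfURLRe = regexp.MustCompile(`https://huggingface\.co/[A-Za-z0-9_.\-]+/[A-Za-z0-9_.\-]+`)

// loadGalleryURLSet collects every HuggingFace URL mentioned in the gallery
// index (and in EXTRA_SKIP_URLS) into a set used for deduplication.
func loadGalleryURLSet() (map[string]struct{}, error) {
	data, err := os.ReadFile(getGalleryIndexPath())
	if err != nil {
		return nil, err
	}
	set := map[string]struct{}{}
	for _, u := range hfURLRe.FindAllString(string(data), -1) {
		set[strings.ToLower(u)] = struct{}{}
	}
	for _, u := range strings.Fields(os.Getenv("EXTRA_SKIP_URLS")) {
		set[strings.ToLower(u)] = struct{}{}
	}
	return set, nil
}

// modelAlreadyInGallery reports whether the model's canonical URL is in the set.
func modelAlreadyInGallery(set map[string]struct{}, modelID string) bool {
	_, ok := set[strings.ToLower(fmt.Sprintf("https://huggingface.co/%s", modelID))]
	return ok
}
```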
46 .github/gallery-agent/tools.go (vendored, removed)

@@ -1,46 +0,0 @@
package main

import (
	"fmt"

	hfapi "github.com/mudler/LocalAI/pkg/huggingface-api"
	openai "github.com/sashabaranov/go-openai"
	jsonschema "github.com/sashabaranov/go-openai/jsonschema"
)

// Get repository README from HF
type HFReadmeTool struct {
	client *hfapi.Client
}

func (s *HFReadmeTool) Execute(args map[string]any) (string, any, error) {
	q, ok := args["repository"].(string)
	if !ok {
		return "", nil, fmt.Errorf("no query")
	}
	readme, err := s.client.GetReadmeContent(q, "README.md")
	if err != nil {
		return "", nil, err
	}
	return readme, nil, nil
}

func (s *HFReadmeTool) Tool() openai.Tool {
	return openai.Tool{
		Type: openai.ToolTypeFunction,
		Function: &openai.FunctionDefinition{
			Name:        "hf_readme",
			Description: "A tool to get the README content of a huggingface repository",
			Parameters: jsonschema.Definition{
				Type: jsonschema.Object,
				Properties: map[string]jsonschema.Definition{
					"repository": {
						Type:        jsonschema.String,
						Description: "The huggingface repository to get the README content of",
					},
				},
				Required: []string{"repository"},
			},
		},
	}
}
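With tools.go removed, the agent no longer asks an LLM tool to fetch the model card; main.go now calls `resolveReadme` directly, which per the comment in the main.go diff follows the `base_model` tag when one is present. `resolveReadme` itself is not part of this diff; a rough sketch under those assumptions, using only the client method that already appears in this changeset (`GetReadmeContent`), with the tag parsing being illustrative:

```go
package main

import (
	"strings"

	hfapi "github.com/mudler/LocalAI/pkg/huggingface-api"
)

// resolveReadme fetches the model card for modelID. Quantized reuploads often
// carry a stub card, so when the repo tags reference a base_model, that
// repository's README is preferred if it has content.
func resolveReadme(client *hfapi.Client, modelID string, tags []string) (string, error) {
	for _, t := range tags {
		if base, ok := strings.CutPrefix(t, "base_model:"); ok && base != "" {
			if readme, err := client.GetReadmeContent(base, "README.md"); err == nil && strings.TrimSpace(readme) != "" {
				return readme, nil
			}
		}
	}
	return client.GetReadmeContent(modelID, "README.md")
}
```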
551 .github/workflows/backend.yml (vendored)
@@ -30,6 +30,7 @@ jobs:
      skip-drivers: ${{ matrix.skip-drivers }}
      context: ${{ matrix.context }}
      ubuntu-version: ${{ matrix.ubuntu-version }}
      amdgpu-targets: ${{ matrix.amdgpu-targets }}
    secrets:
      dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
      dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
@ -53,6 +54,32 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-vllm'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'true'
|
||||
backend: "vllm"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-sglang'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'true'
|
||||
backend: "sglang"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@@ -92,6 +119,25 @@ jobs:
          dockerfile: "./backend/Dockerfile.python"
          context: "./"
          ubuntu-version: '2404'
        # tinygrad ships a single image — its CPU device uses bundled
        # libLLVM, and its CUDA / HIP / Metal devices dlopen the host
        # driver libraries at runtime via tinygrad's ctypes autogen
        # wrappers. There is no toolkit-version split because tinygrad
        # generates kernels itself (PTX renderer for CUDA) and never
        # links against cuDNN/cuBLAS/torch.
        - build-type: ''
          cuda-major-version: ""
          cuda-minor-version: ""
          platforms: 'linux/amd64'
          tag-latest: 'auto'
          tag-suffix: '-tinygrad'
          runs-on: 'ubuntu-latest'
          base-image: "ubuntu:24.04"
          skip-drivers: 'true'
          backend: "tinygrad"
          dockerfile: "./backend/Dockerfile.python"
          context: "./"
          ubuntu-version: '2404'
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -105,6 +151,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-faster-whisper'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'true'
|
||||
backend: "faster-whisper"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -327,6 +386,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-12-turboquant'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
|
|
@ -353,6 +425,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-12-sglang'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sglang"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
|
|
@ -561,6 +646,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-12-sam3-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
|
|
@ -587,6 +685,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-12-qwen3-tts-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "8"
|
||||
|
|
@ -744,6 +855,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-13-turboquant'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -757,6 +881,19 @@ jobs:
|
|||
backend: "llama-cpp"
|
||||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
skip-drivers: 'false'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-cuda-13-arm64-turboquant'
|
||||
base-image: "ubuntu:24.04"
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
ubuntu-version: '2404'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -965,6 +1102,32 @@ jobs:
|
|||
backend: "mlx-distributed"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
- build-type: 'l4t'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-cuda-13-arm64-whisperx'
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
ubuntu-version: '2404'
|
||||
backend: "whisperx"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
- build-type: 'l4t'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-cuda-13-arm64-faster-whisper'
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
ubuntu-version: '2404'
|
||||
backend: "faster-whisper"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -1108,6 +1271,32 @@ jobs:
|
|||
backend: "stablediffusion-ggml"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-13-sam3-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
skip-drivers: 'false'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-cuda-13-arm64-sam3-cpp'
|
||||
base-image: "ubuntu:24.04"
|
||||
ubuntu-version: '2404'
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -1147,6 +1336,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-nvidia-cuda-13-qwen3-tts-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -1160,6 +1362,19 @@ jobs:
|
|||
backend: "acestep-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
skip-drivers: 'false'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp'
|
||||
base-image: "ubuntu:24.04"
|
||||
ubuntu-version: '2404'
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "13"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -1181,7 +1396,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-rerankers'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "rerankers"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1194,12 +1409,25 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-llama-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "llama-cpp"
|
||||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'hipblas'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-turboquant'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'hipblas'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1207,7 +1435,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-vllm'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "vllm"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1220,12 +1448,25 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-vllm-omni'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "vllm-omni"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'hipblas'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-sglang'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "sglang"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'hipblas'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1233,7 +1474,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-transformers'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "transformers"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1246,7 +1487,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-diffusers'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "diffusers"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1259,7 +1500,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-ace-step'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "ace-step"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1273,7 +1514,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-kokoro'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "kokoro"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1286,7 +1527,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-vibevoice'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "vibevoice"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1299,7 +1540,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-qwen-asr'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen-asr"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1312,7 +1553,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-nemo'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "nemo"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1325,7 +1566,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-qwen-tts'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen-tts"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1338,7 +1579,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-fish-speech'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "fish-speech"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1351,7 +1592,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-voxcpm'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "voxcpm"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1364,7 +1605,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-pocket-tts'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "pocket-tts"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1377,7 +1618,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-faster-whisper'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "faster-whisper"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1390,7 +1631,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-whisperx'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "whisperx"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1403,7 +1644,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-coqui'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "coqui"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -1436,6 +1677,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f32'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sycl-f32-turboquant'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f16'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1449,6 +1703,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f16'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sycl-f16-turboquant'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'intel'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1462,6 +1729,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'intel'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sglang'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sglang"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'intel'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1644,6 +1924,32 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: 'l4t'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-whisperx'
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
|
||||
skip-drivers: 'true'
|
||||
backend: "whisperx"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: 'l4t'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-faster-whisper'
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
|
||||
skip-drivers: 'true'
|
||||
backend: "faster-whisper"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
# SYCL additional backends
|
||||
- build-type: 'intel'
|
||||
cuda-major-version: ""
|
||||
|
|
@ -1802,6 +2108,32 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64,linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-turboquant'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-ik-llama-cpp'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "ik-llama-cpp"
|
||||
dockerfile: "./backend/Dockerfile.ik-llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "0"
|
||||
|
|
@ -1815,6 +2147,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
skip-drivers: 'false'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-arm64-turboquant'
|
||||
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: 'vulkan'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1828,6 +2173,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.llama-cpp"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'vulkan'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64,linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-vulkan-turboquant'
|
||||
runs-on: 'bigger-runner'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "turboquant"
|
||||
dockerfile: "./backend/Dockerfile.turboquant"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
# Stablediffusion-ggml
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
|
|
@ -1842,6 +2200,59 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
# sam3-cpp
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-sam3-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f32'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sycl-f32-sam3-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f16'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sycl-f16-sam3-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'vulkan'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64,linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-vulkan-sam3-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f32'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
|
|
@ -1894,6 +2305,19 @@ jobs:
|
|||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
skip-drivers: 'false'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-arm64-sam3-cpp'
|
||||
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
backend: "sam3-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
# whisper
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
|
|
@ -1966,7 +2390,7 @@ jobs:
|
|||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-whisper'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
runs-on: 'ubuntu-latest'
|
||||
skip-drivers: 'false'
|
||||
backend: "whisper"
|
||||
|
|
@ -2045,13 +2469,92 @@ jobs:
|
|||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-acestep-cpp'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
runs-on: 'ubuntu-latest'
|
||||
skip-drivers: 'false'
|
||||
backend: "acestep-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
# qwen3-tts-cpp
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64,linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-cpu-qwen3-tts-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f32'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sycl-f32-qwen3-tts-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'sycl_f16'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-intel-sycl-f16-qwen3-tts-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'vulkan'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64,linux/arm64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-vulkan-qwen3-tts-cpp'
|
||||
runs-on: 'ubuntu-latest'
|
||||
base-image: "ubuntu:24.04"
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
- build-type: 'cublas'
|
||||
cuda-major-version: "12"
|
||||
cuda-minor-version: "0"
|
||||
platforms: 'linux/arm64'
|
||||
skip-drivers: 'false'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-nvidia-l4t-arm64-qwen3-tts-cpp'
|
||||
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
|
||||
runs-on: 'ubuntu-24.04-arm'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2204'
|
||||
- build-type: 'hipblas'
|
||||
cuda-major-version: ""
|
||||
cuda-minor-version: ""
|
||||
platforms: 'linux/amd64'
|
||||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-qwen3-tts-cpp'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
runs-on: 'ubuntu-latest'
|
||||
skip-drivers: 'false'
|
||||
backend: "qwen3-tts-cpp"
|
||||
dockerfile: "./backend/Dockerfile.golang"
|
||||
context: "./"
|
||||
ubuntu-version: '2404'
|
||||
# voxtral
|
||||
- build-type: ''
|
||||
cuda-major-version: ""
|
||||
|
|
@ -2168,7 +2671,7 @@ jobs:
|
|||
# platforms: 'linux/amd64'
|
||||
# tag-latest: 'auto'
|
||||
# tag-suffix: '-gpu-hipblas-rfdetr'
|
||||
# base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
# base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
# runs-on: 'ubuntu-latest'
|
||||
# skip-drivers: 'false'
|
||||
# backend: "rfdetr"
|
||||
|
|
@ -2209,7 +2712,7 @@ jobs:
|
|||
tag-latest: 'auto'
|
||||
tag-suffix: '-gpu-rocm-hipblas-neutts'
|
||||
runs-on: 'arc-runner-set'
|
||||
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
|
||||
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
||||
skip-drivers: 'false'
|
||||
backend: "neutts"
|
||||
dockerfile: "./backend/Dockerfile.python"
|
||||
|
|
@ -2357,6 +2860,10 @@ jobs:
|
|||
tag-suffix: "-metal-darwin-arm64-acestep-cpp"
|
||||
build-type: "metal"
|
||||
lang: "go"
|
||||
- backend: "qwen3-tts-cpp"
|
||||
tag-suffix: "-metal-darwin-arm64-qwen3-tts-cpp"
|
||||
build-type: "metal"
|
||||
lang: "go"
|
||||
- backend: "voxtral"
|
||||
tag-suffix: "-metal-darwin-arm64-voxtral"
|
||||
build-type: "metal"
|
||||
|
|
|
|||
7
.github/workflows/backend_build.yml
vendored
7
.github/workflows/backend_build.yml
vendored
|
|
@ -58,6 +58,11 @@ on:
|
|||
required: false
|
||||
default: '2204'
|
||||
type: string
|
||||
amdgpu-targets:
|
||||
description: 'AMD GPU targets for ROCm/HIP builds'
|
||||
required: false
|
||||
default: 'gfx908,gfx90a,gfx942,gfx950,gfx1030,gfx1100,gfx1101,gfx1102,gfx1151,gfx1200,gfx1201'
|
||||
type: string
|
||||
secrets:
|
||||
dockerUsername:
|
||||
required: false
|
||||
|
|
@ -214,6 +219,7 @@ jobs:
|
|||
BASE_IMAGE=${{ inputs.base-image }}
|
||||
BACKEND=${{ inputs.backend }}
|
||||
UBUNTU_VERSION=${{ inputs.ubuntu-version }}
|
||||
AMDGPU_TARGETS=${{ inputs.amdgpu-targets }}
|
||||
context: ${{ inputs.context }}
|
||||
file: ${{ inputs.dockerfile }}
|
||||
cache-from: type=gha
|
||||
|
|
@ -235,6 +241,7 @@ jobs:
|
|||
BASE_IMAGE=${{ inputs.base-image }}
|
||||
BACKEND=${{ inputs.backend }}
|
||||
UBUNTU_VERSION=${{ inputs.ubuntu-version }}
|
||||
AMDGPU_TARGETS=${{ inputs.amdgpu-targets }}
|
||||
context: ${{ inputs.context }}
|
||||
file: ${{ inputs.dockerfile }}
|
||||
cache-from: type=gha
|
||||
|
|
|
|||
16
.github/workflows/bump_deps.yaml
vendored
16
.github/workflows/bump_deps.yaml
vendored
|
|
@ -14,6 +14,14 @@ jobs:
|
|||
variable: "LLAMA_VERSION"
|
||||
branch: "master"
|
||||
file: "backend/cpp/llama-cpp/Makefile"
|
||||
- repository: "ikawrakow/ik_llama.cpp"
|
||||
variable: "IK_LLAMA_VERSION"
|
||||
branch: "main"
|
||||
file: "backend/cpp/ik-llama-cpp/Makefile"
|
||||
- repository: "TheTom/llama-cpp-turboquant"
|
||||
variable: "TURBOQUANT_VERSION"
|
||||
branch: "feature/turboquant-kv-cache"
|
||||
file: "backend/cpp/turboquant/Makefile"
|
||||
- repository: "ggml-org/whisper.cpp"
|
||||
variable: "WHISPER_CPP_VERSION"
|
||||
branch: "master"
|
||||
|
|
@ -34,6 +42,14 @@ jobs:
|
|||
variable: "ACESTEP_CPP_VERSION"
|
||||
branch: "master"
|
||||
file: "backend/go/acestep-cpp/Makefile"
|
||||
- repository: "PABannier/sam3.cpp"
|
||||
variable: "SAM3_VERSION"
|
||||
branch: "main"
|
||||
file: "backend/go/sam3-cpp/Makefile"
|
||||
- repository: "predict-woo/qwen3-tts.cpp"
|
||||
variable: "QWEN3TTS_CPP_VERSION"
|
||||
branch: "main"
|
||||
file: "backend/go/qwen3-tts-cpp/Makefile"
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v6
|
||||
|
|
|
|||
82
.github/workflows/gallery-agent.yaml
vendored
82
.github/workflows/gallery-agent.yaml
vendored
|
|
@ -48,21 +48,71 @@ jobs:
|
|||
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
|
||||
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@1958fcbe2ca8bd93af633f11e97d44e567e945af
|
||||
PATH="$PATH:$HOME/go/bin" make protogen-go
|
||||
- uses: mudler/localai-github-action@v1.1
|
||||
with:
|
||||
model: 'https://huggingface.co/unsloth/Qwen3.5-2B-GGUF'
|
||||
- name: Process gallery-agent PR commands
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.UPDATE_BOT_TOKEN }}
|
||||
REPO: ${{ github.repository }}
|
||||
SEARCH: 'gallery agent in:title'
|
||||
run: |
|
||||
# Walk open gallery-agent PRs and act on maintainer comments:
|
||||
# /gallery-agent blacklist → label `gallery-agent/blacklisted` + close (never repropose)
|
||||
# /gallery-agent recreate → close without label (next run may repropose)
|
||||
# Only comments from OWNER / MEMBER / COLLABORATOR are honored so
|
||||
# random users can't drive the bot.
|
||||
gh label create gallery-agent/blacklisted \
|
||||
--repo "$REPO" --color ededed \
|
||||
--description "gallery-agent must not repropose this model" 2>/dev/null || true
|
||||
|
||||
prs=$(gh pr list --repo "$REPO" --state open --search "$SEARCH" --json number --jq '.[].number')
|
||||
for pr in $prs; do
|
||||
cmds=$(gh pr view "$pr" --repo "$REPO" --json comments \
|
||||
--jq '.comments[] | select(.authorAssociation=="OWNER" or .authorAssociation=="MEMBER" or .authorAssociation=="COLLABORATOR") | .body')
|
||||
if echo "$cmds" | grep -qE '(^|[[:space:]])/gallery-agent[[:space:]]+blacklist([[:space:]]|$)'; then
|
||||
echo "PR #$pr: blacklist command found"
|
||||
gh pr edit "$pr" --repo "$REPO" --add-label gallery-agent/blacklisted || true
|
||||
gh pr close "$pr" --repo "$REPO" --comment "Blacklisted via \`/gallery-agent blacklist\`. This model will not be reproposed." || true
|
||||
elif echo "$cmds" | grep -qE '(^|[[:space:]])/gallery-agent[[:space:]]+recreate([[:space:]]|$)'; then
|
||||
echo "PR #$pr: recreate command found"
|
||||
gh pr close "$pr" --repo "$REPO" --comment "Closed via \`/gallery-agent recreate\`. The next scheduled run will propose this model again." || true
|
||||
fi
|
||||
done
|
||||
|
||||
- name: Collect skip URLs for the gallery agent
|
||||
id: open_prs
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
REPO: ${{ github.repository }}
|
||||
SEARCH: 'gallery agent in:title'
|
||||
run: |
|
||||
# Skip set =
|
||||
# URLs from any open gallery-agent PR (avoid duplicate PRs for the same model while one is pending)
|
||||
# + URLs from closed PRs carrying the `gallery-agent/blacklisted` label (hard blacklist)
|
||||
# Plain-closed PRs without the label are ignored — closing a PR is
|
||||
# not by itself a "never propose again" signal; maintainers must
|
||||
# opt in via the /gallery-agent blacklist comment command.
|
||||
urls_open=$(gh pr list --repo "$REPO" --state open --search "$SEARCH" \
|
||||
--json body --jq '[.[].body] | join("\n")' \
|
||||
| grep -oE 'https://huggingface\.co/[^ )]+' || true)
|
||||
urls_blacklist=$(gh pr list --repo "$REPO" --state closed --search "$SEARCH" \
|
||||
--label gallery-agent/blacklisted \
|
||||
--json body --jq '[.[].body] | join("\n")' \
|
||||
| grep -oE 'https://huggingface\.co/[^ )]+' || true)
|
||||
urls=$(printf '%s\n%s\n' "$urls_open" "$urls_blacklist" | sort -u | sed '/^$/d')
|
||||
echo "Skip URLs:"
|
||||
echo "$urls"
|
||||
{
|
||||
echo "urls<<EOF"
|
||||
echo "$urls"
|
||||
echo "EOF"
|
||||
} >> "$GITHUB_OUTPUT"
|
||||
|
||||
- name: Run gallery agent
|
||||
env:
|
||||
#OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
|
||||
OPENAI_MODEL: Qwen3.5-2B-GGUF
|
||||
OPENAI_BASE_URL: "http://localhost:8080"
|
||||
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
|
||||
#OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
|
||||
SEARCH_TERM: ${{ github.event.inputs.search_term || 'GGUF' }}
|
||||
LIMIT: ${{ github.event.inputs.limit || '15' }}
|
||||
QUANTIZATION: ${{ github.event.inputs.quantization || 'Q4_K_M' }}
|
||||
MAX_MODELS: ${{ github.event.inputs.max_models || '1' }}
|
||||
EXTRA_SKIP_URLS: ${{ steps.open_prs.outputs.urls }}
|
||||
run: |
|
||||
export GALLERY_INDEX_PATH=$PWD/gallery/index.yaml
|
||||
go run ./.github/gallery-agent
|
||||
|
|
@ -124,7 +174,21 @@ jobs:
|
|||
|
||||
**Added Models:**
|
||||
${{ steps.read_summary.outputs.added_models || '- No models added' }}
|
||||
|
||||
|
||||
### Bot commands
|
||||
|
||||
Maintainers (owner / member / collaborator) can control this PR
|
||||
by leaving a comment with one of:
|
||||
|
||||
- `/gallery-agent recreate` — close this PR; the next scheduled
|
||||
run will propose this model again (useful if the entry needs
|
||||
to be regenerated with fresh metadata).
|
||||
- `/gallery-agent blacklist` — close this PR and permanently
|
||||
prevent the gallery agent from ever reproposing this model.
|
||||
|
||||
Plain "Close" (without a command) is treated as a no-op: the
|
||||
model may be reproposed by a future run.
|
||||
|
||||
**Workflow Details:**
|
||||
- Triggered by: `${{ github.event_name }}`
|
||||
- Run ID: `${{ github.run_id }}`
|
||||
|
|
|
|||
2
.github/workflows/gh-pages.yml
vendored
2
.github/workflows/gh-pages.yml
vendored
|
|
@ -59,7 +59,7 @@ jobs:
|
|||
hugo --minify --baseURL "${{ steps.pages.outputs.base_url }}/"
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-pages-artifact@v4
|
||||
uses: actions/upload-pages-artifact@v5
|
||||
with:
|
||||
path: docs/public
|
||||
|
||||
|
|
|
|||
.github/workflows/image-pr.yml (vendored, 2 changes)

@@ -59,7 +59,7 @@
          platforms: 'linux/amd64'
          tag-latest: 'false'
          tag-suffix: '-hipblas'
-         base-image: "rocm/dev-ubuntu-24.04:6.4.4"
+         base-image: "rocm/dev-ubuntu-24.04:7.2.1"
          grpc-base-image: "ubuntu:24.04"
          runs-on: 'ubuntu-latest'
          makeflags: "--jobs=3 --output-sync=target"

.github/workflows/image.yml (vendored, 2 changes)

@@ -41,7 +41,7 @@
          platforms: 'linux/amd64'
          tag-latest: 'auto'
          tag-suffix: '-gpu-hipblas'
-         base-image: "rocm/dev-ubuntu-24.04:6.4.4"
+         base-image: "rocm/dev-ubuntu-24.04:7.2.1"
          grpc-base-image: "ubuntu:24.04"
          runs-on: 'ubuntu-latest'
          makeflags: "--jobs=3 --output-sync=target"

.github/workflows/release.yaml (vendored, 4 changes)

@@ -39,7 +39,7 @@ jobs:
        run: |
          make build-launcher-darwin
      - name: Upload DMG to Release
-       uses: softprops/action-gh-release@v2
+       uses: softprops/action-gh-release@v3
        with:
          files: ./dist/LocalAI.dmg
  launcher-build-linux:

@@ -59,6 +59,6 @@ jobs:
          sudo apt-get install golang gcc libgl1-mesa-dev xorg-dev libxkbcommon-dev
          make build-launcher-linux
      - name: Upload Linux launcher artifacts
-       uses: softprops/action-gh-release@v2
+       uses: softprops/action-gh-release@v3
        with:
          files: ./local-ai-launcher-linux.tar.xz

.github/workflows/test-extra.yml (vendored, 223 changes)

@@ -29,8 +29,15 @@ jobs:
      nemo: ${{ steps.detect.outputs.nemo }}
      voxcpm: ${{ steps.detect.outputs.voxcpm }}
      llama-cpp-quantization: ${{ steps.detect.outputs.llama-cpp-quantization }}
      llama-cpp: ${{ steps.detect.outputs.llama-cpp }}
      ik-llama-cpp: ${{ steps.detect.outputs.ik-llama-cpp }}
      turboquant: ${{ steps.detect.outputs.turboquant }}
      vllm: ${{ steps.detect.outputs.vllm }}
      sglang: ${{ steps.detect.outputs.sglang }}
      acestep-cpp: ${{ steps.detect.outputs.acestep-cpp }}
      qwen3-tts-cpp: ${{ steps.detect.outputs.qwen3-tts-cpp }}
      voxtral: ${{ steps.detect.outputs.voxtral }}
      kokoros: ${{ steps.detect.outputs.kokoros }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

@@ -463,6 +470,168 @@ jobs:
      - name: Test llama-cpp-quantization
        run: |
          make --jobs=5 --output-sync=target -C backend/python/llama-cpp-quantization test
  tests-llama-cpp-grpc:
    needs: detect-changes
    if: needs.detect-changes.outputs.llama-cpp == 'true' || needs.detect-changes.outputs.run-all == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 90
    steps:
      - name: Clone
        uses: actions/checkout@v6
        with:
          submodules: true
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25.4'
      - name: Build llama-cpp backend image and run gRPC e2e tests
        run: |
          make test-extra-backend-llama-cpp
  tests-llama-cpp-grpc-transcription:
    needs: detect-changes
    if: needs.detect-changes.outputs.llama-cpp == 'true' || needs.detect-changes.outputs.run-all == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 90
    steps:
      - name: Clone
        uses: actions/checkout@v6
        with:
          submodules: true
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25.4'
      - name: Build llama-cpp backend image and run audio transcription gRPC e2e tests
        run: |
          make test-extra-backend-llama-cpp-transcription
  tests-ik-llama-cpp-grpc:
    needs: detect-changes
    if: needs.detect-changes.outputs.ik-llama-cpp == 'true' || needs.detect-changes.outputs.run-all == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 90
    steps:
      - name: Clone
        uses: actions/checkout@v6
        with:
          submodules: true
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25.4'
      - name: Build ik-llama-cpp backend image and run gRPC e2e tests
        run: |
          make test-extra-backend-ik-llama-cpp
  tests-turboquant-grpc:
    needs: detect-changes
    if: needs.detect-changes.outputs.turboquant == 'true' || needs.detect-changes.outputs.run-all == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 90
    steps:
      - name: Clone
        uses: actions/checkout@v6
        with:
          submodules: true
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25.4'
      # Exercises the turboquant (llama.cpp fork) backend with KV-cache
      # quantization enabled. The convenience target sets
      # BACKEND_TEST_CACHE_TYPE_K / _V=q8_0, which are plumbed into the
      # ModelOptions.CacheTypeKey/Value gRPC fields. LoadModel-success +
      # backend stdout/stderr (captured by the Ginkgo suite) prove the
      # cache-type config path reaches the fork's KV-cache init.
      - name: Build turboquant backend image and run gRPC e2e tests
        run: |
          make test-extra-backend-turboquant
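
The comment above describes the cache-type plumbing end to end. A minimal sketch of what that forwarding could look like on the test side, assuming the generated proto package path and the `CacheTypeKey`/`CacheTypeValue` field names the comment mentions (everything else here is illustrative, not the real suite code):

```go
package e2esketch

import (
	"context"
	"os"

	pb "github.com/mudler/LocalAI/pkg/grpc/proto" // assumed import path
)

// loadWithCacheTypes forwards BACKEND_TEST_CACHE_TYPE_K/_V (e.g. "q8_0",
// "turbo3") into the gRPC ModelOptions, mirroring the plumbing described
// in the job comment above.
func loadWithCacheTypes(ctx context.Context, c pb.BackendClient, modelFile string) error {
	opts := &pb.ModelOptions{
		Model:          modelFile,
		CacheTypeKey:   os.Getenv("BACKEND_TEST_CACHE_TYPE_K"),
		CacheTypeValue: os.Getenv("BACKEND_TEST_CACHE_TYPE_V"),
	}
	_, err := c.LoadModel(ctx, opts)
	return err
}
```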

  # tests-vllm-grpc is currently disabled in CI.
  #
  # The prebuilt vllm CPU wheel is compiled with AVX-512 VNNI/BF16
  # instructions, and neither ubuntu-latest nor the bigger-runner pool
  # offers a stable CPU baseline that supports them — runners come
  # back with different hardware between runs and SIGILL on import of
  # vllm.model_executor.models.registry. Compiling vllm from source
  # via FROM_SOURCE=true works on any CPU but takes 30-50 minutes per
  # run, which is too slow for a smoke test.
  #
  # The test itself (tests/e2e-backends + make test-extra-backend-vllm)
  # is fully working and validated locally on a host with the right
  # SIMD baseline. Run it manually with:
  #
  #   make test-extra-backend-vllm
  #
  # Re-enable this job once we have a self-hosted runner label with
  # guaranteed AVX-512 VNNI/BF16 support, or once the vllm project
  # publishes a CPU wheel with a wider baseline.
  #
  # tests-vllm-grpc:
  #   needs: detect-changes
  #   if: needs.detect-changes.outputs.vllm == 'true' || needs.detect-changes.outputs.run-all == 'true'
  #   runs-on: bigger-runner
  #   timeout-minutes: 90
  #   steps:
  #     - name: Clone
  #       uses: actions/checkout@v6
  #       with:
  #         submodules: true
  #     - name: Dependencies
  #       run: |
  #         sudo apt-get update
  #         sudo apt-get install -y --no-install-recommends \
  #           make build-essential curl unzip ca-certificates git tar
  #     - name: Setup Go
  #       uses: actions/setup-go@v5
  #       with:
  #         go-version: '1.25.4'
  #     - name: Free disk space
  #       run: |
  #         sudo rm -rf /usr/share/dotnet /opt/ghc /usr/local/lib/android /opt/hostedtoolcache/CodeQL || true
  #         df -h
  #     - name: Build vllm (cpu) backend image and run gRPC e2e tests
  #       run: |
  #         make test-extra-backend-vllm
  # tests-sglang-grpc is currently disabled in CI for the same reason as
  # tests-vllm-grpc: sglang's CPU kernel (sgl-kernel) uses __m512 AVX-512
  # intrinsics unconditionally in shm.cpp, so the from-source build
  # requires `-march=sapphirerapids` (already set in install.sh) and the
  # resulting binary SIGILLs at import on CPUs without AVX-512 VNNI/BF16.
  # The ubuntu-latest runner pool does not guarantee that ISA baseline.
  #
  # The test itself (tests/e2e-backends + make test-extra-backend-sglang)
  # is fully working and validated locally on a host with the right
  # SIMD baseline. Run it manually with:
  #
  #   make test-extra-backend-sglang
  #
  # Re-enable this job once we have a self-hosted runner label with
  # guaranteed AVX-512 VNNI/BF16 support.
  #
  # tests-sglang-grpc:
  #   needs: detect-changes
  #   if: needs.detect-changes.outputs.sglang == 'true' || needs.detect-changes.outputs.run-all == 'true'
  #   runs-on: bigger-runner
  #   timeout-minutes: 90
  #   steps:
  #     - name: Clone
  #       uses: actions/checkout@v6
  #       with:
  #         submodules: true
  #     - name: Dependencies
  #       run: |
  #         sudo apt-get update
  #         sudo apt-get install -y --no-install-recommends \
  #           make build-essential curl unzip ca-certificates git tar
  #     - name: Setup Go
  #       uses: actions/setup-go@v5
  #       with:
  #         go-version: '1.25.4'
  #     - name: Free disk space
  #       run: |
  #         sudo rm -rf /usr/share/dotnet /opt/ghc /usr/local/lib/android /opt/hostedtoolcache/CodeQL || true
  #         df -h
  #     - name: Build sglang (cpu) backend image and run gRPC e2e tests
  #       run: |
  #         make test-extra-backend-sglang
  tests-acestep-cpp:
    needs: detect-changes
    if: needs.detect-changes.outputs.acestep-cpp == 'true' || needs.detect-changes.outputs.run-all == 'true'

@@ -495,6 +664,38 @@ jobs:
      - name: Test acestep-cpp
        run: |
          make --jobs=5 --output-sync=target -C backend/go/acestep-cpp test
  tests-qwen3-tts-cpp:
    needs: detect-changes
    if: needs.detect-changes.outputs.qwen3-tts-cpp == 'true' || needs.detect-changes.outputs.run-all == 'true'
    runs-on: ubuntu-latest
    steps:
      - name: Clone
        uses: actions/checkout@v6
        with:
          submodules: true
      - name: Dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y build-essential cmake curl libopenblas-dev ffmpeg
      - name: Setup Go
        uses: actions/setup-go@v5
      - name: Display Go version
        run: go version
      - name: Proto Dependencies
        run: |
          # Install protoc
          curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v26.1/protoc-26.1-linux-x86_64.zip -o protoc.zip && \
          unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
          rm protoc.zip
          go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
          go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@1958fcbe2ca8bd93af633f11e97d44e567e945af
          PATH="$PATH:$HOME/go/bin" make protogen-go
      - name: Build qwen3-tts-cpp
        run: |
          make --jobs=5 --output-sync=target -C backend/go/qwen3-tts-cpp
      - name: Test qwen3-tts-cpp
        run: |
          make --jobs=5 --output-sync=target -C backend/go/qwen3-tts-cpp test
  tests-voxtral:
    needs: detect-changes
    if: needs.detect-changes.outputs.voxtral == 'true' || needs.detect-changes.outputs.run-all == 'true'

@@ -528,3 +729,25 @@ jobs:
      - name: Test voxtral
        run: |
          make --jobs=5 --output-sync=target -C backend/go/voxtral test
  tests-kokoros:
    needs: detect-changes
    if: needs.detect-changes.outputs.kokoros == 'true' || needs.detect-changes.outputs.run-all == 'true'
    runs-on: ubuntu-latest
    steps:
      - name: Clone
        uses: actions/checkout@v6
        with:
          submodules: true
      - name: Dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y build-essential cmake pkg-config protobuf-compiler clang libclang-dev
          sudo apt-get install -y espeak-ng libespeak-ng-dev libsonic-dev libpcaudio-dev libopus-dev libssl-dev
          curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
          echo "$HOME/.cargo/bin" >> $GITHUB_PATH
      - name: Build kokoros
        run: |
          make -C backend/rust/kokoros kokoros-grpc
      - name: Test kokoros
        run: |
          make -C backend/rust/kokoros test

.gitmodules (vendored, 3 changes)

@@ -1,3 +1,6 @@
[submodule "docs/themes/hugo-theme-relearn"]
    path = docs/themes/hugo-theme-relearn
    url = https://github.com/McShelby/hugo-theme-relearn.git
[submodule "backend/rust/kokoros/sources/Kokoros"]
    path = backend/rust/kokoros/sources/Kokoros
    url = https://github.com/lucasjinreal/Kokoros

AGENTS.md (16 changes)

@@ -1,18 +1,32 @@
# LocalAI Agent Instructions

-This file is an index to detailed topic guides in the `.agents/` directory. Read the relevant file(s) for the task at hand — you don't need to load all of them.
+This file is the entry point for AI coding assistants (Claude Code, Cursor, Copilot, Codex, Aider, etc.) working on LocalAI. It is an index to detailed topic guides in the `.agents/` directory. Read the relevant file(s) for the task at hand — you don't need to load all of them.

Human contributors: see [CONTRIBUTING.md](CONTRIBUTING.md) for the development workflow.

## Policy for AI-Assisted Contributions

LocalAI follows the Linux kernel project's [guidelines for AI coding assistants](https://docs.kernel.org/process/coding-assistants.html). Before submitting AI-assisted code, read [.agents/ai-coding-assistants.md](.agents/ai-coding-assistants.md). Key rules:

- **No `Signed-off-by` from AI.** Only the human submitter may sign off on the Developer Certificate of Origin.
- **No `Co-Authored-By: <AI>` trailers.** The human contributor owns the change.
- **Use an `Assisted-by:` trailer** to attribute AI involvement. Format: `Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]`.
- **The human submitter is responsible** for reviewing, testing, and understanding every line of generated code.

## Topics

| File | When to read |
|------|-------------|
| [.agents/ai-coding-assistants.md](.agents/ai-coding-assistants.md) | Policy for AI-assisted contributions — licensing, DCO, attribution |
| [.agents/building-and-testing.md](.agents/building-and-testing.md) | Building the project, running tests, Docker builds for specific platforms |
| [.agents/adding-backends.md](.agents/adding-backends.md) | Adding a new backend (Python, Go, or C++) — full step-by-step checklist |
| [.agents/coding-style.md](.agents/coding-style.md) | Code style, editorconfig, logging, documentation conventions |
| [.agents/llama-cpp-backend.md](.agents/llama-cpp-backend.md) | Working on the llama.cpp backend — architecture, updating, tool call parsing |
| [.agents/vllm-backend.md](.agents/vllm-backend.md) | Working on the vLLM / vLLM-omni backends — native parsers, ChatDelta, CPU build, libnuma packaging, backend hooks |
| [.agents/testing-mcp-apps.md](.agents/testing-mcp-apps.md) | Testing MCP Apps (interactive tool UIs) in the React UI |
| [.agents/api-endpoints-and-auth.md](.agents/api-endpoints-and-auth.md) | Adding API endpoints, auth middleware, feature permissions, user access control |
| [.agents/debugging-backends.md](.agents/debugging-backends.md) | Debugging runtime backend failures, dependency conflicts, rebuilding backends |
| [.agents/adding-gallery-models.md](.agents/adding-gallery-models.md) | Adding GGUF models from HuggingFace to the model gallery |

## Quick Reference

CONTRIBUTING.md

@@ -13,6 +13,7 @@ Thank you for your interest in contributing to LocalAI! We appreciate your time
- [Development Workflow](#development-workflow)
- [Creating a Pull Request (PR)](#creating-a-pull-request-pr)
- [Coding Guidelines](#coding-guidelines)
- [AI Coding Assistants](#ai-coding-assistants)
- [Testing](#testing)
- [Documentation](#documentation)
- [Community and Communication](#community-and-communication)

@@ -185,7 +186,7 @@ Before jumping into a PR for a massive feature or big change, it is preferred to

This project uses an [`.editorconfig`](.editorconfig) file to define formatting standards (indentation, line endings, charset, etc.). Please configure your editor to respect it.

-For AI-assisted development, see [`CLAUDE.md`](CLAUDE.md) for agent-specific guidelines including build instructions and backend architecture details.
+For AI-assisted development, see [`AGENTS.md`](AGENTS.md) (or the equivalent [`CLAUDE.md`](CLAUDE.md) symlink) for agent-specific guidelines including build instructions and backend architecture details. Contributions produced with AI assistance must follow the rules in the [AI Coding Assistants](#ai-coding-assistants) section below.

### General Principles

@@ -211,6 +212,26 @@ For AI-assisted development, see [`CLAUDE.md`](CLAUDE.md) for agent-specific gui
- Reviewers will check for correctness, test coverage, adherence to these guidelines, and clarity of intent.
- Be responsive to review feedback and keep discussions constructive.

## AI Coding Assistants

LocalAI follows the **same guidelines as the Linux kernel project** for AI-assisted contributions: <https://docs.kernel.org/process/coding-assistants.html>.

The full policy for this repository lives in [`.agents/ai-coding-assistants.md`](.agents/ai-coding-assistants.md). Summary:

- **AI agents MUST NOT add `Signed-off-by` tags.** Only humans can certify the Developer Certificate of Origin.
- **AI agents MUST NOT add `Co-Authored-By` trailers** attributing themselves as co-authors.
- **Attribute AI involvement with an `Assisted-by` trailer** in the commit message:

  ```
  Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
  ```

  Example: `Assisted-by: Claude:claude-opus-4-7 golangci-lint`

  Basic development tools (git, go, make, editors) should not be listed.
- **The human submitter is responsible** for reviewing, testing, and fully understanding every line of AI-generated code — including verifying that any referenced APIs, flags, or file paths actually exist in the tree.
- Contributions must remain compatible with LocalAI's **MIT License**.

## Testing

All new features and bug fixes should include test coverage. The project uses [Ginkgo](https://onsi.github.io/ginkgo/) as its test framework.

Makefile (209 changes)

@@ -1,5 +1,5 @@
# Disable parallel execution for backend builds
-.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/outetts backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/mlx-distributed backends/stablediffusion-ggml-darwin backends/vllm backends/vllm-omni backends/moonshine backends/pocket-tts backends/qwen-tts backends/faster-qwen3-tts backends/qwen-asr backends/nemo backends/voxcpm backends/whisperx backends/ace-step backends/acestep-cpp backends/fish-speech backends/voxtral backends/opus backends/trl backends/llama-cpp-quantization
+.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/turboquant backends/outetts backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/mlx-distributed backends/stablediffusion-ggml-darwin backends/vllm backends/vllm-omni backends/sglang backends/moonshine backends/pocket-tts backends/qwen-tts backends/faster-qwen3-tts backends/qwen-asr backends/nemo backends/voxcpm backends/whisperx backends/ace-step backends/acestep-cpp backends/fish-speech backends/voxtral backends/opus backends/trl backends/llama-cpp-quantization backends/kokoros backends/sam3-cpp backends/qwen3-tts-cpp backends/tinygrad

GOCMD=go
GOTEST=$(GOCMD) test

@@ -148,7 +148,6 @@ test-models/testmodel.ggml:
	mkdir -p test-dir
	wget -q https://huggingface.co/mradermacher/gpt2-alpaca-gpt4-GGUF/resolve/main/gpt2-alpaca-gpt4.Q4_K_M.gguf -O test-models/testmodel.ggml
	wget -q https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin -O test-models/whisper-en
	wget -q https://huggingface.co/mudler/all-MiniLM-L6-v2/resolve/main/ggml-model-q4_0.bin -O test-models/bert
	wget -q https://cdn.openai.com/whisper/draft-20220913a/micro-machines.wav -O test-dir/audio.wav
	cp tests/models_fixtures/* test-models

@@ -420,6 +419,7 @@ prepare-test-extra: protogen-python
	$(MAKE) -C backend/python/chatterbox
	$(MAKE) -C backend/python/vllm
	$(MAKE) -C backend/python/vllm-omni
	$(MAKE) -C backend/python/sglang
	$(MAKE) -C backend/python/vibevoice
	$(MAKE) -C backend/python/moonshine
	$(MAKE) -C backend/python/pocket-tts

@@ -429,9 +429,12 @@ prepare-test-extra: protogen-python
	$(MAKE) -C backend/python/qwen-asr
	$(MAKE) -C backend/python/nemo
	$(MAKE) -C backend/python/voxcpm
	$(MAKE) -C backend/python/faster-whisper
	$(MAKE) -C backend/python/whisperx
	$(MAKE) -C backend/python/ace-step
	$(MAKE) -C backend/python/trl
	$(MAKE) -C backend/python/tinygrad
	$(MAKE) -C backend/rust/kokoros kokoros-grpc

test-extra: prepare-test-extra
	$(MAKE) -C backend/python/transformers test

@@ -449,9 +452,183 @@ test-extra: prepare-test-extra
	$(MAKE) -C backend/python/qwen-asr test
	$(MAKE) -C backend/python/nemo test
	$(MAKE) -C backend/python/voxcpm test
	$(MAKE) -C backend/python/faster-whisper test
	$(MAKE) -C backend/python/whisperx test
	$(MAKE) -C backend/python/ace-step test
	$(MAKE) -C backend/python/trl test
	$(MAKE) -C backend/python/tinygrad test
	$(MAKE) -C backend/rust/kokoros test

##
## End-to-end gRPC tests that exercise a built backend container image.
##
## The test suite in tests/e2e-backends is backend-agnostic. You drive it via env
## vars (see tests/e2e-backends/backend_test.go for the full list) and the
## capability-driven harness picks which gRPC RPCs to exercise:
##
##   BACKEND_IMAGE            Required. Docker image to test, e.g. local-ai-backend:llama-cpp.
##   BACKEND_TEST_MODEL_URL   URL of a model file to download and load.
##   BACKEND_TEST_MODEL_FILE  Path to an already-downloaded model (skips download).
##   BACKEND_TEST_MODEL_NAME  HuggingFace repo id (e.g. Qwen/Qwen2.5-0.5B-Instruct).
##                            Use this instead of MODEL_URL for backends that
##                            resolve HF model ids natively (vllm, vllm-omni).
##   BACKEND_TEST_CAPS        Comma-separated capabilities, default "health,load,predict,stream".
##                            Adds "tools" to exercise ChatDelta tool call extraction.
##   BACKEND_TEST_PROMPT      Override the prompt used in predict/stream specs.
##   BACKEND_TEST_OPTIONS     Comma-separated Options[] entries forwarded to LoadModel,
##                            e.g. "tool_parser:hermes,reasoning_parser:qwen3".
##
## Direct usage (image already built, no docker-build-* dependency):
##
##   make test-extra-backend BACKEND_IMAGE=local-ai-backend:llama-cpp \
##     BACKEND_TEST_MODEL_URL=https://.../model.gguf
##
## Convenience wrappers below build a specific backend image first, then run the
## suite against it.
##
BACKEND_TEST_MODEL_URL?=https://huggingface.co/Qwen/Qwen3-0.6B-GGUF/resolve/main/Qwen3-0.6B-Q8_0.gguf

## Generic target — runs the suite against whatever BACKEND_IMAGE points at.
## Depends on protogen-go so pkg/grpc/proto is generated before `go test`.
test-extra-backend: protogen-go
	@test -n "$$BACKEND_IMAGE" || { echo "BACKEND_IMAGE must be set" >&2; exit 1; }
	BACKEND_IMAGE="$$BACKEND_IMAGE" \
	BACKEND_TEST_MODEL_URL="$${BACKEND_TEST_MODEL_URL:-$(BACKEND_TEST_MODEL_URL)}" \
	BACKEND_TEST_MODEL_FILE="$$BACKEND_TEST_MODEL_FILE" \
	BACKEND_TEST_MODEL_NAME="$$BACKEND_TEST_MODEL_NAME" \
	BACKEND_TEST_MMPROJ_URL="$$BACKEND_TEST_MMPROJ_URL" \
	BACKEND_TEST_MMPROJ_FILE="$$BACKEND_TEST_MMPROJ_FILE" \
	BACKEND_TEST_AUDIO_URL="$$BACKEND_TEST_AUDIO_URL" \
	BACKEND_TEST_AUDIO_FILE="$$BACKEND_TEST_AUDIO_FILE" \
	BACKEND_TEST_CAPS="$$BACKEND_TEST_CAPS" \
	BACKEND_TEST_PROMPT="$$BACKEND_TEST_PROMPT" \
	BACKEND_TEST_OPTIONS="$$BACKEND_TEST_OPTIONS" \
	BACKEND_TEST_TOOL_PROMPT="$$BACKEND_TEST_TOOL_PROMPT" \
	BACKEND_TEST_TOOL_NAME="$$BACKEND_TEST_TOOL_NAME" \
	BACKEND_TEST_CACHE_TYPE_K="$$BACKEND_TEST_CACHE_TYPE_K" \
	BACKEND_TEST_CACHE_TYPE_V="$$BACKEND_TEST_CACHE_TYPE_V" \
	go test -v -timeout 30m ./tests/e2e-backends/...
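
`BACKEND_TEST_CAPS` is what decides which specs the suite actually runs. A minimal sketch of such a capability gate, using the default list documented above; the helper name is illustrative and not necessarily what tests/e2e-backends uses:

```go
package e2esketch

import (
	"os"
	"strings"
)

// hasCap reports whether a capability (e.g. "tools", "transcription") is
// listed in BACKEND_TEST_CAPS, falling back to the documented default.
func hasCap(name string) bool {
	caps := os.Getenv("BACKEND_TEST_CAPS")
	if caps == "" {
		caps = "health,load,predict,stream"
	}
	for _, c := range strings.Split(caps, ",") {
		if strings.TrimSpace(c) == name {
			return true
		}
	}
	return false
}
```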

## Convenience wrappers: build the image, then exercise it.
test-extra-backend-llama-cpp: docker-build-llama-cpp
	BACKEND_IMAGE=local-ai-backend:llama-cpp $(MAKE) test-extra-backend

test-extra-backend-ik-llama-cpp: docker-build-ik-llama-cpp
	BACKEND_IMAGE=local-ai-backend:ik-llama-cpp $(MAKE) test-extra-backend

## turboquant: exercises the llama.cpp-fork backend with the fork's
## *TurboQuant-specific* KV-cache types (turbo3 for both K and V). turbo3
## is what makes this backend distinct from stock llama-cpp — picking q8_0
## here would only test the standard llama.cpp code path that the upstream
## llama-cpp backend already covers. The fork auto-enables flash_attention
## when turbo3/turbo4 are active, so we don't need to set it explicitly.
test-extra-backend-turboquant: docker-build-turboquant
	BACKEND_IMAGE=local-ai-backend:turboquant \
	BACKEND_TEST_CACHE_TYPE_K=q8_0 \
	BACKEND_TEST_CACHE_TYPE_V=turbo3 \
	$(MAKE) test-extra-backend

## Audio transcription wrapper for the llama-cpp backend.
## Drives the new AudioTranscription / AudioTranscriptionStream RPCs against
## ggml-org/Qwen3-ASR-0.6B-GGUF (a small ASR model that requires its mmproj
## audio encoder companion). The audio fixture is a short public-domain
## "jfk.wav" clip ggml-org bundles with whisper.cpp's CI assets.
test-extra-backend-llama-cpp-transcription: docker-build-llama-cpp
	BACKEND_IMAGE=local-ai-backend:llama-cpp \
	BACKEND_TEST_MODEL_URL=https://huggingface.co/ggml-org/Qwen3-ASR-0.6B-GGUF/resolve/main/Qwen3-ASR-0.6B-Q8_0.gguf \
	BACKEND_TEST_MMPROJ_URL=https://huggingface.co/ggml-org/Qwen3-ASR-0.6B-GGUF/resolve/main/mmproj-Qwen3-ASR-0.6B-Q8_0.gguf \
	BACKEND_TEST_AUDIO_URL=https://github.com/ggml-org/whisper.cpp/raw/master/samples/jfk.wav \
	BACKEND_TEST_CAPS=health,load,transcription \
	$(MAKE) test-extra-backend
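
This wrapper passes the audio fixture by URL only, so the suite presumably downloads it before driving the transcription RPCs. A small standard-library sketch of such a download step (the helper name is illustrative, not the real suite code):

```go
package e2esketch

import (
	"io"
	"net/http"
	"os"
)

// fetchAudioFixture downloads BACKEND_TEST_AUDIO_URL (e.g. the jfk.wav
// sample referenced above) to a local path and returns that path.
func fetchAudioFixture(dst string) (string, error) {
	resp, err := http.Get(os.Getenv("BACKEND_TEST_AUDIO_URL"))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()

	if _, err := io.Copy(f, resp.Body); err != nil {
		return "", err
	}
	return dst, nil
}
```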

## vllm is resolved from a HuggingFace model id (no file download) and
## exercises Predict + streaming + tool-call extraction via the hermes parser.
## Requires a host CPU with the SIMD instructions the prebuilt vllm CPU
## wheel was compiled against (AVX-512 VNNI/BF16); older CPUs will SIGILL
## on import — on CI this means using the bigger-runner label.
test-extra-backend-vllm: docker-build-vllm
	BACKEND_IMAGE=local-ai-backend:vllm \
	BACKEND_TEST_MODEL_NAME=Qwen/Qwen2.5-0.5B-Instruct \
	BACKEND_TEST_CAPS=health,load,predict,stream,tools \
	BACKEND_TEST_OPTIONS=tool_parser:hermes \
	$(MAKE) test-extra-backend

## tinygrad mirrors the vllm target (same model, same caps, same parser) so
## the two backends are directly comparable. The LLM path covers Predict,
## streaming and native tool-call extraction. Companion targets below cover
## embeddings, Stable Diffusion and Whisper — run them individually or via
## the `test-extra-backend-tinygrad-all` aggregate.
test-extra-backend-tinygrad: docker-build-tinygrad
	BACKEND_IMAGE=local-ai-backend:tinygrad \
	BACKEND_TEST_MODEL_NAME=Qwen/Qwen3-0.6B \
	BACKEND_TEST_CAPS=health,load,predict,stream,tools \
	BACKEND_TEST_OPTIONS=tool_parser:hermes \
	$(MAKE) test-extra-backend

## tinygrad — embeddings via LLM last-hidden-state pooling. Reuses the same
## Qwen3-0.6B as the chat target so we don't need a separate BERT vendor;
## the Embedding RPC mean-pools and L2-normalizes the last-layer hidden
## state.
test-extra-backend-tinygrad-embeddings: docker-build-tinygrad
	BACKEND_IMAGE=local-ai-backend:tinygrad \
	BACKEND_TEST_MODEL_NAME=Qwen/Qwen3-0.6B \
	BACKEND_TEST_CAPS=health,load,embeddings \
	$(MAKE) test-extra-backend
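
The pooling described above is simple to state precisely: average the last-layer hidden state over the sequence, then L2-normalize. An illustrative Go version of that math (the real backend does this in Python/tinygrad; this is only a reference for what the Embedding RPC is expected to return):

```go
package e2esketch

import "math"

// poolAndNormalize mean-pools a [seq][dim] hidden state and L2-normalizes
// the result, mirroring the embedding path described above.
func poolAndNormalize(hidden [][]float32) []float32 {
	if len(hidden) == 0 {
		return nil
	}
	dim := len(hidden[0])
	out := make([]float32, dim)
	for _, tok := range hidden {
		for i, v := range tok {
			out[i] += v
		}
	}
	var norm float64
	for i := range out {
		out[i] /= float32(len(hidden))
		norm += float64(out[i]) * float64(out[i])
	}
	if n := math.Sqrt(norm); n > 0 {
		for i := range out {
			out[i] = float32(float64(out[i]) / n)
		}
	}
	return out
}
```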

## tinygrad — Stable Diffusion 1.5. The original CompVis/runwayml repos have
## been gated, so we use the community-maintained mirror at
## stable-diffusion-v1-5/stable-diffusion-v1-5 with the EMA-only pruned
## checkpoint (~4.3GB). Step count is kept low (4) so a CPU-only run finishes
## in a few minutes; bump BACKEND_TEST_IMAGE_STEPS for higher quality.
test-extra-backend-tinygrad-sd: docker-build-tinygrad
	BACKEND_IMAGE=local-ai-backend:tinygrad \
	BACKEND_TEST_MODEL_NAME=stable-diffusion-v1-5/stable-diffusion-v1-5 \
	BACKEND_TEST_CAPS=health,load,image \
	$(MAKE) test-extra-backend

## tinygrad — Whisper. Loads OpenAI's tiny.en checkpoint (smallest at ~75MB)
## from the original azure CDN through tinygrad's `fetch` helper, and
## transcribes the canonical jfk.wav fixture from whisper.cpp's CI samples.
## Exercises both AudioTranscription and AudioTranscriptionStream.
test-extra-backend-tinygrad-whisper: docker-build-tinygrad
	BACKEND_IMAGE=local-ai-backend:tinygrad \
	BACKEND_TEST_MODEL_NAME=openai/whisper-tiny.en \
	BACKEND_TEST_AUDIO_URL=https://github.com/ggml-org/whisper.cpp/raw/master/samples/jfk.wav \
	BACKEND_TEST_CAPS=health,load,transcription \
	$(MAKE) test-extra-backend

test-extra-backend-tinygrad-all: \
	test-extra-backend-tinygrad \
	test-extra-backend-tinygrad-embeddings \
	test-extra-backend-tinygrad-sd \
	test-extra-backend-tinygrad-whisper

## sglang mirrors the vllm setup: HuggingFace model id, same tiny Qwen,
## tool-call extraction via sglang's native qwen parser. CPU builds use
## sglang's upstream pyproject_cpu.toml recipe (see backend/python/sglang/install.sh).
test-extra-backend-sglang: docker-build-sglang
	BACKEND_IMAGE=local-ai-backend:sglang \
	BACKEND_TEST_MODEL_NAME=Qwen/Qwen2.5-0.5B-Instruct \
	BACKEND_TEST_CAPS=health,load,predict,stream,tools \
	BACKEND_TEST_OPTIONS=tool_parser:qwen \
	$(MAKE) test-extra-backend

## mlx is Apple-Silicon-first — the MLX backend auto-detects the right tool
## parser from the chat template, so no tool_parser: option is needed (it
## would be ignored at runtime). Run this on macOS / arm64 with Metal; the
## Linux/CPU mlx variant is untested in CI.
test-extra-backend-mlx: docker-build-mlx
	BACKEND_IMAGE=local-ai-backend:mlx \
	BACKEND_TEST_MODEL_NAME=mlx-community/Qwen2.5-0.5B-Instruct-4bit \
	BACKEND_TEST_CAPS=health,load,predict,stream,tools \
	$(MAKE) test-extra-backend

test-extra-backend-mlx-vlm: docker-build-mlx-vlm
	BACKEND_IMAGE=local-ai-backend:mlx-vlm \
	BACKEND_TEST_MODEL_NAME=mlx-community/Qwen2.5-0.5B-Instruct-4bit \
	BACKEND_TEST_CAPS=health,load,predict,stream,tools \
	$(MAKE) test-extra-backend

DOCKER_IMAGE?=local-ai
IMAGE_TYPE?=core

@@ -546,6 +723,11 @@ backend-images:
# Backend metadata: BACKEND_NAME | DOCKERFILE_TYPE | BUILD_CONTEXT | PROGRESS_FLAG | NEEDS_BACKEND_ARG
# llama-cpp is special - uses llama-cpp Dockerfile and doesn't need BACKEND arg
BACKEND_LLAMA_CPP = llama-cpp|llama-cpp|.|false|false
# ik-llama-cpp is a fork of llama.cpp with superior CPU performance
BACKEND_IK_LLAMA_CPP = ik-llama-cpp|ik-llama-cpp|.|false|false
# turboquant is a llama.cpp fork with TurboQuant KV-cache quantization.
# Reuses backend/cpp/llama-cpp grpc-server sources via a thin wrapper Makefile.
BACKEND_TURBOQUANT = turboquant|turboquant|.|false|false

# Golang backends
BACKEND_PIPER = piper|golang|.|false|true

@@ -556,6 +738,7 @@ BACKEND_STABLEDIFFUSION_GGML = stablediffusion-ggml|golang|.|--progress=plain|tr
BACKEND_WHISPER = whisper|golang|.|false|true
BACKEND_VOXTRAL = voxtral|golang|.|false|true
BACKEND_ACESTEP_CPP = acestep-cpp|golang|.|false|true
BACKEND_QWEN3_TTS_CPP = qwen3-tts-cpp|golang|.|false|true
BACKEND_OPUS = opus|golang|.|false|true

# Python backends with root context

@@ -570,6 +753,7 @@ BACKEND_NEUTTS = neutts|python|.|false|true
BACKEND_KOKORO = kokoro|python|.|false|true
BACKEND_VLLM = vllm|python|.|false|true
BACKEND_VLLM_OMNI = vllm-omni|python|.|false|true
BACKEND_SGLANG = sglang|python|.|false|true
BACKEND_DIFFUSERS = diffusers|python|.|--progress=plain|true
BACKEND_CHATTERBOX = chatterbox|python|.|false|true
BACKEND_VIBEVOICE = vibevoice|python|.|--progress=plain|true

@@ -583,9 +767,18 @@ BACKEND_NEMO = nemo|python|.|false|true
BACKEND_VOXCPM = voxcpm|python|.|false|true
BACKEND_WHISPERX = whisperx|python|.|false|true
BACKEND_ACE_STEP = ace-step|python|.|false|true
BACKEND_MLX = mlx|python|.|false|true
BACKEND_MLX_VLM = mlx-vlm|python|.|false|true
BACKEND_MLX_DISTRIBUTED = mlx-distributed|python|./|false|true
BACKEND_TRL = trl|python|.|false|true
BACKEND_LLAMA_CPP_QUANTIZATION = llama-cpp-quantization|python|.|false|true
BACKEND_TINYGRAD = tinygrad|python|.|false|true

# Rust backends
BACKEND_KOKOROS = kokoros|rust|.|false|true

# C++ backends (Go wrapper with purego)
BACKEND_SAM3_CPP = sam3-cpp|golang|.|false|true

# Helper function to build docker image for a backend
# Usage: $(call docker-build-backend,BACKEND_NAME,DOCKERFILE_TYPE,BUILD_CONTEXT,PROGRESS_FLAG,NEEDS_BACKEND_ARG)

@@ -597,6 +790,7 @@ define docker-build-backend
		--build-arg CUDA_MINOR_VERSION=$(CUDA_MINOR_VERSION) \
		--build-arg UBUNTU_VERSION=$(UBUNTU_VERSION) \
		--build-arg UBUNTU_CODENAME=$(UBUNTU_CODENAME) \
		$(if $(FROM_SOURCE),--build-arg FROM_SOURCE=$(FROM_SOURCE)) \
		$(if $(filter true,$(5)),--build-arg BACKEND=$(1)) \
		-t local-ai-backend:$(1) -f backend/Dockerfile.$(2) $(3)
endef

@@ -609,6 +803,8 @@ endef

# Generate all docker-build targets
$(eval $(call generate-docker-build-target,$(BACKEND_LLAMA_CPP)))
$(eval $(call generate-docker-build-target,$(BACKEND_IK_LLAMA_CPP)))
$(eval $(call generate-docker-build-target,$(BACKEND_TURBOQUANT)))
$(eval $(call generate-docker-build-target,$(BACKEND_PIPER)))
$(eval $(call generate-docker-build-target,$(BACKEND_LOCAL_STORE)))
$(eval $(call generate-docker-build-target,$(BACKEND_HUGGINGFACE)))

@@ -628,6 +824,7 @@ $(eval $(call generate-docker-build-target,$(BACKEND_NEUTTS)))
$(eval $(call generate-docker-build-target,$(BACKEND_KOKORO)))
$(eval $(call generate-docker-build-target,$(BACKEND_VLLM)))
$(eval $(call generate-docker-build-target,$(BACKEND_VLLM_OMNI)))
$(eval $(call generate-docker-build-target,$(BACKEND_SGLANG)))
$(eval $(call generate-docker-build-target,$(BACKEND_DIFFUSERS)))
$(eval $(call generate-docker-build-target,$(BACKEND_CHATTERBOX)))
$(eval $(call generate-docker-build-target,$(BACKEND_VIBEVOICE)))

@@ -642,15 +839,21 @@ $(eval $(call generate-docker-build-target,$(BACKEND_VOXCPM)))
$(eval $(call generate-docker-build-target,$(BACKEND_WHISPERX)))
$(eval $(call generate-docker-build-target,$(BACKEND_ACE_STEP)))
$(eval $(call generate-docker-build-target,$(BACKEND_ACESTEP_CPP)))
$(eval $(call generate-docker-build-target,$(BACKEND_QWEN3_TTS_CPP)))
$(eval $(call generate-docker-build-target,$(BACKEND_MLX)))
$(eval $(call generate-docker-build-target,$(BACKEND_MLX_VLM)))
$(eval $(call generate-docker-build-target,$(BACKEND_MLX_DISTRIBUTED)))
$(eval $(call generate-docker-build-target,$(BACKEND_TRL)))
$(eval $(call generate-docker-build-target,$(BACKEND_LLAMA_CPP_QUANTIZATION)))
$(eval $(call generate-docker-build-target,$(BACKEND_TINYGRAD)))
$(eval $(call generate-docker-build-target,$(BACKEND_KOKOROS)))
$(eval $(call generate-docker-build-target,$(BACKEND_SAM3_CPP)))

# Pattern rule for docker-save targets
docker-save-%: backend-images
	docker save local-ai-backend:$* -o backend-images/$*.tar

-docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-vllm-omni docker-build-transformers docker-build-outetts docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-chatterbox docker-build-vibevoice docker-build-moonshine docker-build-pocket-tts docker-build-qwen-tts docker-build-fish-speech docker-build-faster-qwen3-tts docker-build-qwen-asr docker-build-nemo docker-build-voxcpm docker-build-whisperx docker-build-ace-step docker-build-acestep-cpp docker-build-voxtral docker-build-mlx-distributed docker-build-trl docker-build-llama-cpp-quantization
+docker-build-backends: docker-build-llama-cpp docker-build-ik-llama-cpp docker-build-turboquant docker-build-rerankers docker-build-vllm docker-build-vllm-omni docker-build-sglang docker-build-transformers docker-build-outetts docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-chatterbox docker-build-vibevoice docker-build-moonshine docker-build-pocket-tts docker-build-qwen-tts docker-build-fish-speech docker-build-faster-qwen3-tts docker-build-qwen-asr docker-build-nemo docker-build-voxcpm docker-build-whisperx docker-build-ace-step docker-build-acestep-cpp docker-build-voxtral docker-build-mlx-distributed docker-build-trl docker-build-llama-cpp-quantization docker-build-tinygrad docker-build-kokoros docker-build-sam3-cpp docker-build-qwen3-tts-cpp

########################################################
### Mock Backend for E2E Tests

README.md

@@ -32,7 +32,7 @@
**LocalAI** is the open-source AI engine. Run any model - LLMs, vision, voice, image, video - on any hardware. No GPU required.

- **Drop-in API compatibility** — OpenAI, Anthropic, ElevenLabs APIs
-- **35+ backends** — llama.cpp, vLLM, transformers, whisper, diffusers, MLX...
+- **36+ backends** — llama.cpp, vLLM, transformers, whisper, diffusers, MLX...
- **Any hardware** — NVIDIA, AMD, Intel, Apple Silicon, Vulkan, or CPU-only
- **Multi-user ready** — API key auth, user quotas, role-based access
- **Built-in AI agents** — autonomous agents with tool use, RAG, MCP, and skills

@@ -185,7 +185,7 @@ For older news and full release notes, see [GitHub Releases](https://github.com/

## Supported Backends & Acceleration

-LocalAI supports **35+ backends** including llama.cpp, vLLM, transformers, whisper.cpp, diffusers, MLX, MLX-VLM, and many more. Hardware acceleration is available for **NVIDIA** (CUDA 12/13), **AMD** (ROCm), **Intel** (oneAPI/SYCL), **Apple Silicon** (Metal), **Vulkan**, and **NVIDIA Jetson** (L4T). All backends can be installed on-the-fly from the [Backend Gallery](https://localai.io/backends/).
+LocalAI supports **36+ backends** including llama.cpp, vLLM, transformers, whisper.cpp, diffusers, MLX, MLX-VLM, and many more. Hardware acceleration is available for **NVIDIA** (CUDA 12/13), **AMD** (ROCm), **Intel** (oneAPI/SYCL), **Apple Silicon** (Metal), **Vulkan**, and **NVIDIA Jetson** (L4T). All backends can be installed on-the-fly from the [Backend Gallery](https://localai.io/backends/).

See the full [Backend & Model Compatibility Table](https://localai.io/model-compatibility/) and [GPU Acceleration guide](https://localai.io/features/gpu-acceleration/).

@@ -196,6 +196,7 @@ See the full [Backend & Model Compatibility Table](https://localai.io/model-comp
- [Build from source](https://localai.io/basics/build/)
- [Kubernetes installation](https://localai.io/basics/getting_started/#run-localai-in-kubernetes)
- [Integrations & community projects](https://localai.io/docs/integrations/)
- [Installation video walkthrough](https://www.youtube.com/watch?v=cMVNnlqwfw4)
- [Media & blog posts](https://localai.io/basics/news/#media-blogs-social)
- [Examples](https://github.com/mudler/LocalAI-examples)

backend/Dockerfile.ik-llama-cpp (new file, 281 lines)

@@ -0,0 +1,281 @@
ARG BASE_IMAGE=ubuntu:24.04
|
||||
ARG GRPC_BASE_IMAGE=${BASE_IMAGE}
|
||||
|
||||
|
||||
# The grpc target does one thing, it builds and installs GRPC. This is in it's own layer so that it can be effectively cached by CI.
|
||||
# You probably don't need to change anything here, and if you do, make sure that CI is adjusted so that the cache continues to work.
|
||||
FROM ${GRPC_BASE_IMAGE} AS grpc
|
||||
|
||||
# This is a bit of a hack, but it's required in order to be able to effectively cache this layer in CI
|
||||
ARG GRPC_MAKEFLAGS="-j4 -Otarget"
|
||||
ARG GRPC_VERSION=v1.65.0
|
||||
ARG CMAKE_FROM_SOURCE=false
|
||||
# CUDA Toolkit 13.x compatibility: CMake 3.31.9+ fixes toolchain detection/arch table issues
|
||||
ARG CMAKE_VERSION=3.31.10
|
||||
|
||||
ENV MAKEFLAGS=${GRPC_MAKEFLAGS}
|
||||
|
||||
WORKDIR /build
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
ca-certificates \
|
||||
build-essential curl libssl-dev \
|
||||
git wget && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Install CMake (the version in 22.04 is too old)
|
||||
RUN <<EOT bash
|
||||
if [ "${CMAKE_FROM_SOURCE}" = "true" ]; then
|
||||
curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
|
||||
else
|
||||
apt-get update && \
|
||||
apt-get install -y \
|
||||
cmake && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
# We install GRPC to a different prefix here so that we can copy in only the build artifacts later
|
||||
# saves several hundred MB on the final docker image size vs copying in the entire GRPC source tree
|
||||
# and running make install in the target container
|
||||
RUN git clone --recurse-submodules --jobs 4 -b ${GRPC_VERSION} --depth 1 --shallow-submodules https://github.com/grpc/grpc && \
|
||||
mkdir -p /build/grpc/cmake/build && \
|
||||
cd /build/grpc/cmake/build && \
|
||||
sed -i "216i\ TESTONLY" "../../third_party/abseil-cpp/absl/container/CMakeLists.txt" && \
|
||||
cmake -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF -DCMAKE_INSTALL_PREFIX:PATH=/opt/grpc ../.. && \
|
||||
make && \
|
||||
make install && \
|
||||
rm -rf /build
|
||||
|
||||
FROM ${BASE_IMAGE} AS builder
|
||||
ARG CMAKE_FROM_SOURCE=false
|
||||
ARG CMAKE_VERSION=3.31.10
|
||||
# We can target specific CUDA ARCHITECTURES like --build-arg CUDA_DOCKER_ARCH='75;86;89;120'
|
||||
ARG CUDA_DOCKER_ARCH
|
||||
ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
|
||||
ARG CMAKE_ARGS
|
||||
ENV CMAKE_ARGS=${CMAKE_ARGS}
|
||||
ARG BACKEND=rerankers
|
||||
ARG BUILD_TYPE
|
||||
ENV BUILD_TYPE=${BUILD_TYPE}
|
||||
ARG CUDA_MAJOR_VERSION
|
||||
ARG CUDA_MINOR_VERSION
|
||||
ARG SKIP_DRIVERS=false
|
||||
ENV CUDA_MAJOR_VERSION=${CUDA_MAJOR_VERSION}
|
||||
ENV CUDA_MINOR_VERSION=${CUDA_MINOR_VERSION}
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
ARG TARGETARCH
|
||||
ARG TARGETVARIANT
|
||||
ARG GO_VERSION=1.25.4
|
||||
ARG UBUNTU_VERSION=2404
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
build-essential \
|
||||
ccache git \
|
||||
ca-certificates \
|
||||
make \
|
||||
pkg-config libcurl4-openssl-dev \
|
||||
curl unzip \
|
||||
libssl-dev wget && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Cuda
|
||||
ENV PATH=/usr/local/cuda/bin:${PATH}
|
||||
|
||||
# HipBLAS requirements
|
||||
ENV PATH=/opt/rocm/bin:${PATH}
|
||||
|
||||
|
||||
# Vulkan requirements
|
||||
RUN <<EOT bash
|
||||
if [ "${BUILD_TYPE}" = "vulkan" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
software-properties-common pciutils wget gpg-agent && \
|
||||
apt-get install -y libglm-dev cmake libxcb-dri3-0 libxcb-present0 libpciaccess0 \
|
||||
libpng-dev libxcb-keysyms1-dev libxcb-dri3-dev libx11-dev g++ gcc \
|
||||
libwayland-dev libxrandr-dev libxcb-randr0-dev libxcb-ewmh-dev \
|
||||
git python-is-python3 bison libx11-xcb-dev liblz4-dev libzstd-dev \
|
||||
ocaml-core ninja-build pkg-config libxml2-dev wayland-protocols python3-jsonschema \
|
||||
clang-format qtbase5-dev qt6-base-dev libxcb-glx0-dev sudo xz-utils
|
||||
if [ "amd64" = "$TARGETARCH" ]; then
|
||||
wget "https://sdk.lunarg.com/sdk/download/1.4.335.0/linux/vulkansdk-linux-x86_64-1.4.335.0.tar.xz" && \
|
||||
tar -xf vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
|
||||
rm vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
|
||||
mkdir -p /opt/vulkan-sdk && \
|
||||
mv 1.4.335.0 /opt/vulkan-sdk/ && \
|
||||
cd /opt/vulkan-sdk/1.4.335.0 && \
|
||||
./vulkansdk --no-deps --maxjobs \
|
||||
vulkan-loader \
|
||||
vulkan-validationlayers \
|
||||
vulkan-extensionlayer \
|
||||
vulkan-tools \
|
||||
shaderc && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/bin/* /usr/bin/ && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/include/* /usr/include/ && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/share/* /usr/share/ && \
|
||||
rm -rf /opt/vulkan-sdk
|
||||
fi
|
||||
if [ "arm64" = "$TARGETARCH" ]; then
|
||||
mkdir vulkan && cd vulkan && \
|
||||
curl -L -o vulkan-sdk.tar.xz https://github.com/mudler/vulkan-sdk-arm/releases/download/1.4.335.0/vulkansdk-ubuntu-24.04-arm-1.4.335.0.tar.xz && \
|
||||
tar -xvf vulkan-sdk.tar.xz && \
|
||||
rm vulkan-sdk.tar.xz && \
|
||||
cd 1.4.335.0 && \
|
||||
cp -rfv aarch64/bin/* /usr/bin/ && \
|
||||
cp -rfv aarch64/lib/* /usr/lib/aarch64-linux-gnu/ && \
|
||||
cp -rfv aarch64/include/* /usr/include/ && \
|
||||
cp -rfv aarch64/share/* /usr/share/ && \
|
||||
cd ../.. && \
|
||||
rm -rf vulkan
|
||||
fi
|
||||
ldconfig && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
# CuBLAS requirements
|
||||
RUN <<EOT bash
|
||||
if ( [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "l4t" ] ) && [ "${SKIP_DRIVERS}" = "false" ]; then
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
software-properties-common pciutils
|
||||
if [ "amd64" = "$TARGETARCH" ]; then
|
||||
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
|
||||
fi
|
||||
if [ "arm64" = "$TARGETARCH" ]; then
|
||||
if [ "${CUDA_MAJOR_VERSION}" = "13" ]; then
|
||||
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/sbsa/cuda-keyring_1.1-1_all.deb
|
||||
else
|
||||
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/arm64/cuda-keyring_1.1-1_all.deb
|
||||
fi
|
||||
fi
|
||||
dpkg -i cuda-keyring_1.1-1_all.deb && \
|
||||
rm -f cuda-keyring_1.1-1_all.deb && \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
cuda-nvcc-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcufft-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
|
||||
if [ "${CUDA_MAJOR_VERSION}" = "13" ] && [ "arm64" = "$TARGETARCH" ]; then
|
||||
apt-get install -y --no-install-recommends \
|
||||
libcufile-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcudnn9-cuda-${CUDA_MAJOR_VERSION} cuda-cupti-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libnvjitlink-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
|
||||
fi
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
|
||||
# https://github.com/NVIDIA/Isaac-GR00T/issues/343
|
||||
RUN <<EOT bash
|
||||
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
|
||||
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
|
||||
dpkg -i cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
|
||||
cp /var/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
|
||||
apt-get update && apt-get -y install cudss cudss-cuda-${CUDA_MAJOR_VERSION} && \
|
||||
wget https://developer.download.nvidia.com/compute/nvpl/25.5/local_installers/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
|
||||
dpkg -i nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
|
||||
cp /var/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5/nvpl-*-keyring.gpg /usr/share/keyrings/ && \
|
||||
apt-get update && apt-get install -y nvpl
|
||||
fi
|
||||
EOT
|
||||
|
||||
# If we are building with clblas support, we need the libraries for the builds
|
||||
RUN if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
libclblast-dev && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/* \
|
||||
; fi
|
||||
|
||||
RUN if [ "${BUILD_TYPE}" = "hipblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
hipblas-dev \
|
||||
rocblas-dev && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/* && \
|
||||
# I have no idea why, but the ROCM lib packages don't trigger ldconfig after they install, which results in local-ai and others not being able
|
||||
# to locate the libraries. We run ldconfig ourselves to work around this packaging deficiency
|
||||
ldconfig \
|
||||
; fi
|
||||
|
||||
RUN echo "TARGETARCH: $TARGETARCH"
|
||||
|
||||
# We need protoc installed, and the version in 22.04 is too old. We will create one as part installing the GRPC build below
|
||||
# but that will also being in a newer version of absl which stablediffusion cannot compile with. This version of protoc is only
|
||||
# here so that we can generate the grpc code for the stablediffusion build
|
||||
RUN <<EOT bash
|
||||
if [ "amd64" = "$TARGETARCH" ]; then
|
||||
curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v27.1/protoc-27.1-linux-x86_64.zip -o protoc.zip && \
|
||||
unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
|
||||
rm protoc.zip
|
||||
fi
|
||||
if [ "arm64" = "$TARGETARCH" ]; then
|
||||
curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v27.1/protoc-27.1-linux-aarch_64.zip -o protoc.zip && \
|
||||
unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
|
||||
rm protoc.zip
|
||||
fi
|
||||
EOT
|
||||
|
||||
# Install CMake (the version in 22.04 is too old)
|
||||
RUN <<EOT bash
|
||||
if [ "${CMAKE_FROM_SOURCE}" = "true" ]; then
|
||||
curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
|
||||
else
|
||||
apt-get update && \
|
||||
apt-get install -y \
|
||||
cmake && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
COPY --from=grpc /opt/grpc /usr/local
|
||||
|
||||
|
||||
COPY . /LocalAI
|
||||
|
||||
RUN <<'EOT' bash
|
||||
set -euxo pipefail
|
||||
|
||||
if [[ -n "${CUDA_DOCKER_ARCH:-}" ]]; then
|
||||
CUDA_ARCH_ESC="${CUDA_DOCKER_ARCH//;/\\;}"
|
||||
export CMAKE_ARGS="${CMAKE_ARGS:-} -DCMAKE_CUDA_ARCHITECTURES=${CUDA_ARCH_ESC}"
|
||||
echo "CMAKE_ARGS(env) = ${CMAKE_ARGS}"
|
||||
rm -rf /LocalAI/backend/cpp/ik-llama-cpp-*-build
|
||||
fi
|
||||
|
||||
cd /LocalAI/backend/cpp/ik-llama-cpp
|
||||
|
||||
if [ "${TARGETARCH}" = "arm64" ] || [ "${BUILD_TYPE}" = "hipblas" ]; then
|
||||
# ARM64 / ROCm: build without x86 SIMD
|
||||
make ik-llama-cpp-fallback
|
||||
else
|
||||
# ik_llama.cpp's IQK kernels require at least AVX2
|
||||
make ik-llama-cpp-avx2
|
||||
fi
|
||||
EOT
|
||||
|
||||
|
||||
# Copy libraries using a script to handle architecture differences
|
||||
RUN make -BC /LocalAI/backend/cpp/ik-llama-cpp package
|
||||
|
||||
|
||||
FROM scratch
|
||||
|
||||
|
||||
# Copy all available binaries (the build process only creates the appropriate ones for the target architecture)
|
||||
COPY --from=builder /LocalAI/backend/cpp/ik-llama-cpp/package/. ./
|
||||
|
|

@@ -58,6 +58,8 @@ ARG CUDA_DOCKER_ARCH
ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
ARG CMAKE_ARGS
ENV CMAKE_ARGS=${CMAKE_ARGS}
ARG AMDGPU_TARGETS
ENV AMDGPU_TARGETS=${AMDGPU_TARGETS}
ARG BACKEND=rerankers
ARG BUILD_TYPE
ENV BUILD_TYPE=${BUILD_TYPE}

@@ -209,7 +211,11 @@ RUN if [ "${BUILD_TYPE}" = "hipblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
    rm -rf /var/lib/apt/lists/* && \
    # I have no idea why, but the ROCM lib packages don't trigger ldconfig after they install, which results in local-ai and others not being able
    # to locate the libraries. We run ldconfig ourselves to work around this packaging deficiency
-   ldconfig \
+   ldconfig && \
+   # Log which GPU architectures have rocBLAS kernel support
+   echo "rocBLAS library data architectures:" && \
+   (ls /opt/rocm*/lib/rocblas/library/Kernels* 2>/dev/null || ls /opt/rocm*/lib64/rocblas/library/Kernels* 2>/dev/null) | grep -oP 'gfx[0-9a-z+-]+' | sort -u || \
+   echo "WARNING: No rocBLAS kernel data found" \
    ; fi

RUN echo "TARGETARCH: $TARGETARCH"

@@ -29,6 +29,7 @@ RUN apt-get update && \
    curl python3-pip \
    python-is-python3 \
    python3-dev llvm \
    libnuma1 libgomp1 \
    python3-venv make cmake && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

@@ -195,6 +196,12 @@ COPY backend/backend.proto /${BACKEND}/backend.proto
COPY backend/python/common/ /${BACKEND}/common
COPY scripts/build/package-gpu-libs.sh /package-gpu-libs.sh

# Optional per-backend source build toggle (e.g. vllm on CPU can set
# FROM_SOURCE=true to compile against the build host SIMD instead of
# pulling a prebuilt wheel). Default empty — most backends ignore it.
ARG FROM_SOURCE=""
ENV FROM_SOURCE=${FROM_SOURCE}

RUN cd /${BACKEND} && PORTABLE_PYTHON=true make

# Package GPU libraries into the backend's lib directory

backend/Dockerfile.rust (new file, 39 lines)

@@ -0,0 +1,39 @@
ARG BASE_IMAGE=ubuntu:24.04

FROM ${BASE_IMAGE} AS builder
ARG BACKEND=kokoros
ENV DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
ARG TARGETVARIANT

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    build-essential \
    git ccache \
    ca-certificates \
    make cmake wget \
    curl unzip \
    clang \
    pkg-config \
    libssl-dev \
    espeak-ng libespeak-ng-dev \
    libsonic-dev libpcaudio-dev \
    libopus-dev \
    protobuf-compiler && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install Rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"

COPY . /LocalAI

RUN git config --global --add safe.directory /LocalAI

RUN make -C /LocalAI/backend/rust/${BACKEND} build

FROM scratch
ARG BACKEND=kokoros

COPY --from=builder /LocalAI/backend/rust/${BACKEND}/package/. ./

backend/Dockerfile.turboquant (new file, 290 lines)
|
|
@ -0,0 +1,290 @@
|
|||
ARG BASE_IMAGE=ubuntu:24.04
|
||||
ARG GRPC_BASE_IMAGE=${BASE_IMAGE}
|
||||
|
||||
|
||||
# The grpc target does one thing: it builds and installs GRPC. This is in its own layer so that it can be effectively cached by CI.
|
||||
# You probably don't need to change anything here, and if you do, make sure that CI is adjusted so that the cache continues to work.
|
||||
FROM ${GRPC_BASE_IMAGE} AS grpc
|
||||
|
||||
# This is a bit of a hack, but it's required in order to be able to effectively cache this layer in CI
|
||||
ARG GRPC_MAKEFLAGS="-j4 -Otarget"
|
||||
ARG GRPC_VERSION=v1.65.0
|
||||
ARG CMAKE_FROM_SOURCE=false
|
||||
# CUDA Toolkit 13.x compatibility: CMake 3.31.9+ fixes toolchain detection/arch table issues
|
||||
ARG CMAKE_VERSION=3.31.10
|
||||
|
||||
ENV MAKEFLAGS=${GRPC_MAKEFLAGS}
|
||||
|
||||
WORKDIR /build
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
ca-certificates \
|
||||
build-essential curl libssl-dev \
|
||||
git wget && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Install CMake (the version in 22.04 is too old)
|
||||
RUN <<EOT bash
|
||||
if [ "${CMAKE_FROM_SOURCE}" = "true" ]; then
|
||||
curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
|
||||
else
|
||||
apt-get update && \
|
||||
apt-get install -y \
|
||||
cmake && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
# We install GRPC to a different prefix here so that we can copy in only the build artifacts later
|
||||
# saves several hundred MB on the final docker image size vs copying in the entire GRPC source tree
|
||||
# and running make install in the target container
|
||||
RUN git clone --recurse-submodules --jobs 4 -b ${GRPC_VERSION} --depth 1 --shallow-submodules https://github.com/grpc/grpc && \
|
||||
mkdir -p /build/grpc/cmake/build && \
|
||||
cd /build/grpc/cmake/build && \
|
||||
sed -i "216i\ TESTONLY" "../../third_party/abseil-cpp/absl/container/CMakeLists.txt" && \
|
||||
cmake -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF -DCMAKE_INSTALL_PREFIX:PATH=/opt/grpc ../.. && \
|
||||
make && \
|
||||
make install && \
|
||||
rm -rf /build
|
||||
|
||||
FROM ${BASE_IMAGE} AS builder
|
||||
ARG CMAKE_FROM_SOURCE=false
|
||||
ARG CMAKE_VERSION=3.31.10
|
||||
# We can target specific CUDA ARCHITECTURES like --build-arg CUDA_DOCKER_ARCH='75;86;89;120'
|
||||
ARG CUDA_DOCKER_ARCH
|
||||
ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
|
||||
ARG CMAKE_ARGS
|
||||
ENV CMAKE_ARGS=${CMAKE_ARGS}
|
||||
ARG BACKEND=rerankers
|
||||
ARG BUILD_TYPE
|
||||
ENV BUILD_TYPE=${BUILD_TYPE}
|
||||
ARG CUDA_MAJOR_VERSION
|
||||
ARG CUDA_MINOR_VERSION
|
||||
ARG SKIP_DRIVERS=false
|
||||
ENV CUDA_MAJOR_VERSION=${CUDA_MAJOR_VERSION}
|
||||
ENV CUDA_MINOR_VERSION=${CUDA_MINOR_VERSION}
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
ARG TARGETARCH
|
||||
ARG TARGETVARIANT
|
||||
ARG GO_VERSION=1.25.4
|
||||
ARG UBUNTU_VERSION=2404
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
build-essential \
|
||||
ccache git \
|
||||
ca-certificates \
|
||||
make \
|
||||
pkg-config libcurl4-openssl-dev \
|
||||
curl unzip \
|
||||
libssl-dev wget && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Cuda
|
||||
ENV PATH=/usr/local/cuda/bin:${PATH}
|
||||
|
||||
# HipBLAS requirements
|
||||
ENV PATH=/opt/rocm/bin:${PATH}
|
||||
|
||||
|
||||
# Vulkan requirements
|
||||
RUN <<EOT bash
|
||||
if [ "${BUILD_TYPE}" = "vulkan" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
software-properties-common pciutils wget gpg-agent && \
|
||||
apt-get install -y libglm-dev cmake libxcb-dri3-0 libxcb-present0 libpciaccess0 \
|
||||
libpng-dev libxcb-keysyms1-dev libxcb-dri3-dev libx11-dev g++ gcc \
|
||||
libwayland-dev libxrandr-dev libxcb-randr0-dev libxcb-ewmh-dev \
|
||||
git python-is-python3 bison libx11-xcb-dev liblz4-dev libzstd-dev \
|
||||
ocaml-core ninja-build pkg-config libxml2-dev wayland-protocols python3-jsonschema \
|
||||
clang-format qtbase5-dev qt6-base-dev libxcb-glx0-dev sudo xz-utils
|
||||
if [ "amd64" = "$TARGETARCH" ]; then
|
||||
wget "https://sdk.lunarg.com/sdk/download/1.4.335.0/linux/vulkansdk-linux-x86_64-1.4.335.0.tar.xz" && \
|
||||
tar -xf vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
|
||||
rm vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
|
||||
mkdir -p /opt/vulkan-sdk && \
|
||||
mv 1.4.335.0 /opt/vulkan-sdk/ && \
|
||||
cd /opt/vulkan-sdk/1.4.335.0 && \
|
||||
./vulkansdk --no-deps --maxjobs \
|
||||
vulkan-loader \
|
||||
vulkan-validationlayers \
|
||||
vulkan-extensionlayer \
|
||||
vulkan-tools \
|
||||
shaderc && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/bin/* /usr/bin/ && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/include/* /usr/include/ && \
|
||||
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/share/* /usr/share/ && \
|
||||
rm -rf /opt/vulkan-sdk
|
||||
fi
|
||||
if [ "arm64" = "$TARGETARCH" ]; then
|
||||
mkdir vulkan && cd vulkan && \
|
||||
curl -L -o vulkan-sdk.tar.xz https://github.com/mudler/vulkan-sdk-arm/releases/download/1.4.335.0/vulkansdk-ubuntu-24.04-arm-1.4.335.0.tar.xz && \
|
||||
tar -xvf vulkan-sdk.tar.xz && \
|
||||
rm vulkan-sdk.tar.xz && \
|
||||
cd 1.4.335.0 && \
|
||||
cp -rfv aarch64/bin/* /usr/bin/ && \
|
||||
cp -rfv aarch64/lib/* /usr/lib/aarch64-linux-gnu/ && \
|
||||
cp -rfv aarch64/include/* /usr/include/ && \
|
||||
cp -rfv aarch64/share/* /usr/share/ && \
|
||||
cd ../.. && \
|
||||
rm -rf vulkan
|
||||
fi
|
||||
ldconfig && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
# CuBLAS requirements
|
||||
RUN <<EOT bash
|
||||
if ( [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "l4t" ] ) && [ "${SKIP_DRIVERS}" = "false" ]; then
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
software-properties-common pciutils
|
||||
if [ "amd64" = "$TARGETARCH" ]; then
|
||||
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
|
||||
fi
|
||||
if [ "arm64" = "$TARGETARCH" ]; then
|
||||
if [ "${CUDA_MAJOR_VERSION}" = "13" ]; then
|
||||
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/sbsa/cuda-keyring_1.1-1_all.deb
|
||||
else
|
||||
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/arm64/cuda-keyring_1.1-1_all.deb
|
||||
fi
|
||||
fi
|
||||
dpkg -i cuda-keyring_1.1-1_all.deb && \
|
||||
rm -f cuda-keyring_1.1-1_all.deb && \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
cuda-nvcc-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcufft-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
|
||||
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
|
||||
if [ "${CUDA_MAJOR_VERSION}" = "13" ] && [ "arm64" = "$TARGETARCH" ]; then
|
||||
apt-get install -y --no-install-recommends \
|
||||
libcufile-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcudnn9-cuda-${CUDA_MAJOR_VERSION} cuda-cupti-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libnvjitlink-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
|
||||
fi
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
|
||||
# https://github.com/NVIDIA/Isaac-GR00T/issues/343
|
||||
RUN <<EOT bash
|
||||
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
|
||||
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
|
||||
dpkg -i cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
|
||||
cp /var/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
|
||||
apt-get update && apt-get -y install cudss cudss-cuda-${CUDA_MAJOR_VERSION} && \
|
||||
wget https://developer.download.nvidia.com/compute/nvpl/25.5/local_installers/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
|
||||
dpkg -i nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
|
||||
cp /var/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5/nvpl-*-keyring.gpg /usr/share/keyrings/ && \
|
||||
apt-get update && apt-get install -y nvpl
|
||||
fi
|
||||
EOT
|
||||
|
||||
# If we are building with clblas support, we need the libraries for the builds
|
||||
RUN if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
libclblast-dev && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/* \
|
||||
; fi
|
||||
|
||||
RUN if [ "${BUILD_TYPE}" = "hipblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
hipblas-dev \
|
||||
rocblas-dev && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/* && \
|
||||
# I have no idea why, but the ROCM lib packages don't trigger ldconfig after they install, which results in local-ai and others not being able
|
||||
# to locate the libraries. We run ldconfig ourselves to work around this packaging deficiency
|
||||
ldconfig && \
|
||||
# Log which GPU architectures have rocBLAS kernel support
|
||||
echo "rocBLAS library data architectures:" && \
|
||||
(ls /opt/rocm*/lib/rocblas/library/Kernels* 2>/dev/null || ls /opt/rocm*/lib64/rocblas/library/Kernels* 2>/dev/null) | grep -oP 'gfx[0-9a-z+-]+' | sort -u || \
|
||||
echo "WARNING: No rocBLAS kernel data found" \
|
||||
; fi
|
||||
|
||||
RUN echo "TARGETARCH: $TARGETARCH"
|
||||
|
||||
# We need protoc installed, and the version in 22.04 is too old. We will create one as part of installing the GRPC build below,
# but that will also bring in a newer version of absl which stablediffusion cannot compile with. This version of protoc is only
|
||||
# here so that we can generate the grpc code for the stablediffusion build
|
||||
RUN <<EOT bash
|
||||
if [ "amd64" = "$TARGETARCH" ]; then
|
||||
curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v27.1/protoc-27.1-linux-x86_64.zip -o protoc.zip && \
|
||||
unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
|
||||
rm protoc.zip
|
||||
fi
|
||||
if [ "arm64" = "$TARGETARCH" ]; then
|
||||
curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v27.1/protoc-27.1-linux-aarch_64.zip -o protoc.zip && \
|
||||
unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
|
||||
rm protoc.zip
|
||||
fi
|
||||
EOT
|
||||
|
||||
# Install CMake (the version in 22.04 is too old)
|
||||
RUN <<EOT bash
|
||||
if [ "${CMAKE_FROM_SOURCE}" = "true" ]; then
|
||||
curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
|
||||
else
|
||||
apt-get update && \
|
||||
apt-get install -y \
|
||||
cmake && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
fi
|
||||
EOT
|
||||
|
||||
COPY --from=grpc /opt/grpc /usr/local
|
||||
|
||||
|
||||
COPY . /LocalAI
|
||||
|
||||
RUN <<'EOT' bash
|
||||
set -euxo pipefail
|
||||
|
||||
if [[ -n "${CUDA_DOCKER_ARCH:-}" ]]; then
|
||||
CUDA_ARCH_ESC="${CUDA_DOCKER_ARCH//;/\\;}"
|
||||
export CMAKE_ARGS="${CMAKE_ARGS:-} -DCMAKE_CUDA_ARCHITECTURES=${CUDA_ARCH_ESC}"
|
||||
echo "CMAKE_ARGS(env) = ${CMAKE_ARGS}"
|
||||
rm -rf /LocalAI/backend/cpp/turboquant-*-build
|
||||
fi
|
||||
|
||||
cd /LocalAI/backend/cpp/turboquant
|
||||
|
||||
if [ "${TARGETARCH}" = "arm64" ] || [ "${BUILD_TYPE}" = "hipblas" ]; then
|
||||
make turboquant-fallback
|
||||
make turboquant-grpc
|
||||
make turboquant-rpc-server
|
||||
else
|
||||
make turboquant-avx
|
||||
make turboquant-avx2
|
||||
make turboquant-avx512
|
||||
make turboquant-fallback
|
||||
make turboquant-grpc
|
||||
make turboquant-rpc-server
|
||||
fi
|
||||
EOT
|
||||
|
||||
|
||||
# Copy libraries using a script to handle architecture differences
|
||||
RUN make -BC /LocalAI/backend/cpp/turboquant package
|
||||
|
||||
|
||||
FROM scratch
|
||||
|
||||
|
||||
# Copy all available binaries (the build process only creates the appropriate ones for the target architecture)
|
||||
COPY --from=builder /LocalAI/backend/cpp/turboquant/package/. ./
|
||||
|
|
@ -17,6 +17,7 @@ service Backend {
|
|||
rpc GenerateImage(GenerateImageRequest) returns (Result) {}
|
||||
rpc GenerateVideo(GenerateVideoRequest) returns (Result) {}
|
||||
rpc AudioTranscription(TranscriptRequest) returns (TranscriptResult) {}
|
||||
rpc AudioTranscriptionStream(TranscriptRequest) returns (stream TranscriptStreamResponse) {}
|
||||
rpc TTS(TTSRequest) returns (Result) {}
|
||||
rpc TTSStream(TTSRequest) returns (stream Reply) {}
|
||||
rpc SoundGeneration(SoundGenerationRequest) returns (Result) {}
|
||||
|
|
@ -322,11 +323,21 @@ message TranscriptRequest {
|
|||
bool translate = 5;
|
||||
bool diarize = 6;
|
||||
string prompt = 7;
|
||||
float temperature = 8;
|
||||
repeated string timestamp_granularities = 9;
|
||||
bool stream = 10;
|
||||
}
|
||||
|
||||
message TranscriptResult {
|
||||
repeated TranscriptSegment segments = 1;
|
||||
string text = 2;
|
||||
string language = 3;
|
||||
float duration = 4;
|
||||
}
|
||||
|
||||
message TranscriptStreamResponse {
|
||||
string delta = 1;
|
||||
TranscriptResult final_result = 2;
|
||||
}
|
||||
|
||||
message TranscriptSegment {
|
||||
|
|
@ -444,6 +455,10 @@ message Message {
|
|||
|
||||
message DetectOptions {
|
||||
string src = 1;
|
||||
string prompt = 2; // Text prompt (for SAM 3 PCS mode)
|
||||
repeated float points = 3; // Point coordinates as [x1, y1, label1, x2, y2, label2, ...] (label: 1=pos, 0=neg)
|
||||
repeated float boxes = 4; // Box coordinates as [x1, y1, x2, y2, ...]
|
||||
float threshold = 5; // Detection confidence threshold
|
||||
}
|
||||
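To make the flat points encoding above concrete, here is a hedged illustration of a request payload with one positive and one negative point prompt; the JSON field names simply mirror the proto fields and the file path is a placeholder:

# one positive point at (120, 80), one negative point at (300, 200)
cat <<'EOF' > detect-request.json
{
  "src": "/tmp/image.png",
  "points": [120, 80, 1, 300, 200, 0],
  "threshold": 0.5
}
EOF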
|
||||
message Detection {
|
||||
|
|
@ -453,6 +468,7 @@ message Detection {
|
|||
float height = 4;
|
||||
float confidence = 5;
|
||||
string class_name = 6;
|
||||
bytes mask = 7; // PNG-encoded binary segmentation mask
|
||||
}
|
||||
|
||||
message DetectResponse {
|
||||
|
|
@ -541,6 +557,7 @@ message ModelMetadataResponse {
|
|||
bool supports_thinking = 1;
|
||||
string rendered_template = 2; // The rendered chat template with enable_thinking=true (empty if not applicable)
|
||||
ToolFormatMarkers tool_format = 3; // Auto-detected tool format markers from differential template analysis
|
||||
string media_marker = 4; // Marker the backend expects in the prompt for each multimodal input (images/audio/video). Empty when the backend does not use a marker.
|
||||
}
|
||||
|
||||
// Fine-tuning messages
|
||||
|
|
|
|||
backend/cpp/ik-llama-cpp/CMakeLists.txt (new file, 78 lines)
|
|
@ -0,0 +1,78 @@
|
|||
## Clip/LLaVA library for multimodal support — built locally from copied sources
|
||||
set(TARGET myclip)
|
||||
add_library(${TARGET} clip.cpp clip.h llava.cpp llava.h)
|
||||
install(TARGETS ${TARGET} LIBRARY)
|
||||
target_include_directories(myclip PUBLIC .)
|
||||
target_include_directories(myclip PUBLIC ../..)
|
||||
target_include_directories(myclip PUBLIC ../../common)
|
||||
target_link_libraries(${TARGET} PRIVATE common ggml llama ${CMAKE_THREAD_LIBS_INIT})
|
||||
target_compile_features(${TARGET} PRIVATE cxx_std_11)
|
||||
if (NOT MSVC)
|
||||
target_compile_options(${TARGET} PRIVATE -Wno-cast-qual)
|
||||
endif()
|
||||
|
||||
set(TARGET grpc-server)
|
||||
set(CMAKE_CXX_STANDARD 17)
|
||||
cmake_minimum_required(VERSION 3.15)
|
||||
set(TARGET grpc-server)
|
||||
set(_PROTOBUF_LIBPROTOBUF libprotobuf)
|
||||
set(_REFLECTION grpc++_reflection)
|
||||
|
||||
if (${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
|
||||
if (CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "arm64")
|
||||
set(HOMEBREW_DEFAULT_PREFIX "/opt/homebrew")
|
||||
else()
|
||||
set(HOMEBREW_DEFAULT_PREFIX "/usr/local")
|
||||
endif()
|
||||
link_directories("${HOMEBREW_DEFAULT_PREFIX}/lib")
|
||||
include_directories("${HOMEBREW_DEFAULT_PREFIX}/include")
|
||||
endif()
|
||||
|
||||
find_package(absl CONFIG REQUIRED)
|
||||
find_package(Protobuf CONFIG REQUIRED)
|
||||
find_package(gRPC CONFIG REQUIRED)
|
||||
|
||||
find_program(_PROTOBUF_PROTOC protoc)
|
||||
set(_GRPC_GRPCPP grpc++)
|
||||
find_program(_GRPC_CPP_PLUGIN_EXECUTABLE grpc_cpp_plugin)
|
||||
|
||||
include_directories(${CMAKE_CURRENT_BINARY_DIR})
|
||||
include_directories(${Protobuf_INCLUDE_DIRS})
|
||||
|
||||
message(STATUS "Using protobuf version ${Protobuf_VERSION} | Protobuf_INCLUDE_DIRS: ${Protobuf_INCLUDE_DIRS} | CMAKE_CURRENT_BINARY_DIR: ${CMAKE_CURRENT_BINARY_DIR}")
|
||||
|
||||
# Proto file
|
||||
get_filename_component(hw_proto "../../../../../../backend/backend.proto" ABSOLUTE)
|
||||
get_filename_component(hw_proto_path "${hw_proto}" PATH)
|
||||
|
||||
set(hw_proto_srcs "${CMAKE_CURRENT_BINARY_DIR}/backend.pb.cc")
|
||||
set(hw_proto_hdrs "${CMAKE_CURRENT_BINARY_DIR}/backend.pb.h")
|
||||
set(hw_grpc_srcs "${CMAKE_CURRENT_BINARY_DIR}/backend.grpc.pb.cc")
|
||||
set(hw_grpc_hdrs "${CMAKE_CURRENT_BINARY_DIR}/backend.grpc.pb.h")
|
||||
|
||||
add_custom_command(
|
||||
OUTPUT "${hw_proto_srcs}" "${hw_proto_hdrs}" "${hw_grpc_srcs}" "${hw_grpc_hdrs}"
|
||||
COMMAND ${_PROTOBUF_PROTOC}
|
||||
ARGS --grpc_out "${CMAKE_CURRENT_BINARY_DIR}"
|
||||
--cpp_out "${CMAKE_CURRENT_BINARY_DIR}"
|
||||
-I "${hw_proto_path}"
|
||||
--plugin=protoc-gen-grpc="${_GRPC_CPP_PLUGIN_EXECUTABLE}"
|
||||
"${hw_proto}"
|
||||
DEPENDS "${hw_proto}")
|
||||
|
||||
add_library(hw_grpc_proto
|
||||
${hw_grpc_srcs}
|
||||
${hw_grpc_hdrs}
|
||||
${hw_proto_srcs}
|
||||
${hw_proto_hdrs} )
|
||||
|
||||
add_executable(${TARGET} grpc-server.cpp json.hpp)
|
||||
target_link_libraries(${TARGET} PRIVATE common llama myclip ${CMAKE_THREAD_LIBS_INIT} absl::flags hw_grpc_proto
|
||||
absl::flags_parse
|
||||
gRPC::${_REFLECTION}
|
||||
gRPC::${_GRPC_GRPCPP}
|
||||
protobuf::${_PROTOBUF_LIBPROTOBUF})
|
||||
target_compile_features(${TARGET} PRIVATE cxx_std_11)
|
||||
if(TARGET BUILD_INFO)
|
||||
add_dependencies(${TARGET} BUILD_INFO)
|
||||
endif()
|
||||
backend/cpp/ik-llama-cpp/Makefile (new file, 167 lines)
|
|
@ -0,0 +1,167 @@
|
|||
|
||||
IK_LLAMA_VERSION?=d4824131580b94ffa7b0e91c955e2b237c2fe16e
|
||||
LLAMA_REPO?=https://github.com/ikawrakow/ik_llama.cpp
|
||||
|
||||
CMAKE_ARGS?=
|
||||
BUILD_TYPE?=
|
||||
NATIVE?=false
|
||||
ONEAPI_VARS?=/opt/intel/oneapi/setvars.sh
|
||||
TARGET?=--target grpc-server
|
||||
JOBS?=$(shell nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
|
||||
ARCH?=$(shell uname -m)
|
||||
|
||||
# Disable Shared libs as we are linking on static gRPC and we can't mix shared and static
|
||||
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF -DLLAMA_CURL=OFF
|
||||
|
||||
CURRENT_MAKEFILE_DIR := $(dir $(abspath $(lastword $(MAKEFILE_LIST))))
|
||||
ifeq ($(NATIVE),false)
|
||||
CMAKE_ARGS+=-DGGML_NATIVE=OFF -DLLAMA_OPENSSL=OFF
|
||||
endif
|
||||
# If build type is cublas, then we set -DGGML_CUDA=ON to CMAKE_ARGS automatically
|
||||
ifeq ($(BUILD_TYPE),cublas)
|
||||
CMAKE_ARGS+=-DGGML_CUDA=ON
|
||||
# If build type is openblas then we set -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
|
||||
# to CMAKE_ARGS automatically
|
||||
else ifeq ($(BUILD_TYPE),openblas)
|
||||
CMAKE_ARGS+=-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
|
||||
# If build type is clblas (openCL) we set -DGGML_CLBLAST=ON -DCLBlast_DIR=/some/path
|
||||
else ifeq ($(BUILD_TYPE),clblas)
|
||||
CMAKE_ARGS+=-DGGML_CLBLAST=ON -DCLBlast_DIR=/some/path
|
||||
# If it's hipblas we do have also to set CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++
|
||||
else ifeq ($(BUILD_TYPE),hipblas)
|
||||
ROCM_HOME ?= /opt/rocm
|
||||
ROCM_PATH ?= /opt/rocm
|
||||
export CXX=$(ROCM_HOME)/llvm/bin/clang++
|
||||
export CC=$(ROCM_HOME)/llvm/bin/clang
|
||||
AMDGPU_TARGETS?=gfx803,gfx900,gfx906,gfx908,gfx90a,gfx942,gfx1010,gfx1030,gfx1032,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201
|
||||
CMAKE_ARGS+=-DGGML_HIP=ON -DAMDGPU_TARGETS=$(AMDGPU_TARGETS)
|
||||
else ifeq ($(BUILD_TYPE),vulkan)
|
||||
CMAKE_ARGS+=-DGGML_VULKAN=1
|
||||
else ifeq ($(OS),Darwin)
|
||||
ifeq ($(BUILD_TYPE),)
|
||||
BUILD_TYPE=metal
|
||||
endif
|
||||
ifneq ($(BUILD_TYPE),metal)
|
||||
CMAKE_ARGS+=-DGGML_METAL=OFF
|
||||
else
|
||||
CMAKE_ARGS+=-DGGML_METAL=ON
|
||||
CMAKE_ARGS+=-DGGML_METAL_EMBED_LIBRARY=ON
|
||||
CMAKE_ARGS+=-DGGML_METAL_USE_BF16=ON
|
||||
CMAKE_ARGS+=-DGGML_OPENMP=OFF
|
||||
endif
|
||||
TARGET+=--target ggml-metal
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),sycl_f16)
|
||||
CMAKE_ARGS+=-DGGML_SYCL=ON \
|
||||
-DCMAKE_C_COMPILER=icx \
|
||||
-DCMAKE_CXX_COMPILER=icpx \
|
||||
-DCMAKE_CXX_FLAGS="-fsycl" \
|
||||
-DGGML_SYCL_F16=ON
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),sycl_f32)
|
||||
CMAKE_ARGS+=-DGGML_SYCL=ON \
|
||||
-DCMAKE_C_COMPILER=icx \
|
||||
-DCMAKE_CXX_COMPILER=icpx \
|
||||
-DCMAKE_CXX_FLAGS="-fsycl"
|
||||
endif
|
||||
|
||||
INSTALLED_PACKAGES=$(CURDIR)/../grpc/installed_packages
|
||||
INSTALLED_LIB_CMAKE=$(INSTALLED_PACKAGES)/lib/cmake
|
||||
ADDED_CMAKE_ARGS=-Dabsl_DIR=${INSTALLED_LIB_CMAKE}/absl \
|
||||
-DProtobuf_DIR=${INSTALLED_LIB_CMAKE}/protobuf \
|
||||
-Dutf8_range_DIR=${INSTALLED_LIB_CMAKE}/utf8_range \
|
||||
-DgRPC_DIR=${INSTALLED_LIB_CMAKE}/grpc \
|
||||
-DCMAKE_CXX_STANDARD_INCLUDE_DIRECTORIES=${INSTALLED_PACKAGES}/include
|
||||
build-ik-llama-cpp-grpc-server:
|
||||
# Conditionally build grpc for the backend to use if needed
|
||||
ifdef BUILD_GRPC_FOR_BACKEND_LLAMA
|
||||
$(MAKE) -C ../../grpc build
|
||||
_PROTOBUF_PROTOC=${INSTALLED_PACKAGES}/bin/proto \
|
||||
_GRPC_CPP_PLUGIN_EXECUTABLE=${INSTALLED_PACKAGES}/bin/grpc_cpp_plugin \
|
||||
PATH="${INSTALLED_PACKAGES}/bin:${PATH}" \
|
||||
CMAKE_ARGS="${CMAKE_ARGS} ${ADDED_CMAKE_ARGS}" \
|
||||
IK_LLAMA_VERSION=$(IK_LLAMA_VERSION) \
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../$(VARIANT) grpc-server
|
||||
else
|
||||
echo "BUILD_GRPC_FOR_BACKEND_LLAMA is not defined."
|
||||
IK_LLAMA_VERSION=$(IK_LLAMA_VERSION) $(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../$(VARIANT) grpc-server
|
||||
endif
|
||||
|
||||
ik-llama-cpp-avx2: llama.cpp
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx2-build
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx2-build purge
|
||||
$(info ${GREEN}I ik-llama-cpp build info:avx2${RESET})
|
||||
CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=off -DGGML_FMA=on -DGGML_F16C=on" $(MAKE) VARIANT="ik-llama-cpp-avx2-build" build-ik-llama-cpp-grpc-server
|
||||
cp -rfv $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx2-build/grpc-server ik-llama-cpp-avx2
|
||||
|
||||
ik-llama-cpp-avx512: llama.cpp
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx512-build
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx512-build purge
|
||||
$(info ${GREEN}I ik-llama-cpp build info:avx512${RESET})
|
||||
CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=on -DGGML_FMA=on -DGGML_F16C=on" $(MAKE) VARIANT="ik-llama-cpp-avx512-build" build-ik-llama-cpp-grpc-server
|
||||
cp -rfv $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx512-build/grpc-server ik-llama-cpp-avx512
|
||||
|
||||
ik-llama-cpp-avx: llama.cpp
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx-build
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx-build purge
|
||||
$(info ${GREEN}I ik-llama-cpp build info:avx${RESET})
|
||||
CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" $(MAKE) VARIANT="ik-llama-cpp-avx-build" build-ik-llama-cpp-grpc-server
|
||||
cp -rfv $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-avx-build/grpc-server ik-llama-cpp-avx
|
||||
|
||||
ik-llama-cpp-fallback: llama.cpp
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-fallback-build
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-fallback-build purge
|
||||
$(info ${GREEN}I ik-llama-cpp build info:fallback${RESET})
|
||||
CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" $(MAKE) VARIANT="ik-llama-cpp-fallback-build" build-ik-llama-cpp-grpc-server
|
||||
cp -rfv $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-fallback-build/grpc-server ik-llama-cpp-fallback
|
||||
|
||||
ik-llama-cpp-grpc: llama.cpp
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-grpc-build
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-grpc-build purge
|
||||
$(info ${GREEN}I ik-llama-cpp build info:grpc${RESET})
|
||||
CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_RPC=ON -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" TARGET="--target grpc-server --target rpc-server" $(MAKE) VARIANT="ik-llama-cpp-grpc-build" build-ik-llama-cpp-grpc-server
|
||||
cp -rfv $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-grpc-build/grpc-server ik-llama-cpp-grpc
|
||||
|
||||
ik-llama-cpp-rpc-server: ik-llama-cpp-grpc
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../ik-llama-cpp-grpc-build/llama.cpp/build/bin/rpc-server ik-llama-cpp-rpc-server
|
||||
|
||||
llama.cpp:
|
||||
mkdir -p llama.cpp
|
||||
cd llama.cpp && \
|
||||
git init && \
|
||||
git remote add origin $(LLAMA_REPO) && \
|
||||
git fetch origin && \
|
||||
git checkout -b build $(IK_LLAMA_VERSION) && \
|
||||
git submodule update --init --recursive --depth 1 --single-branch
|
||||
|
||||
llama.cpp/examples/grpc-server: llama.cpp
|
||||
mkdir -p llama.cpp/examples/grpc-server
|
||||
bash prepare.sh
|
||||
|
||||
rebuild:
|
||||
bash prepare.sh
|
||||
rm -rf grpc-server
|
||||
$(MAKE) grpc-server
|
||||
|
||||
package:
|
||||
bash package.sh
|
||||
|
||||
purge:
|
||||
rm -rf llama.cpp/build
|
||||
rm -rf llama.cpp/examples/grpc-server
|
||||
rm -rf grpc-server
|
||||
|
||||
clean: purge
|
||||
rm -rf llama.cpp
|
||||
|
||||
grpc-server: llama.cpp llama.cpp/examples/grpc-server
|
||||
@echo "Building grpc-server with $(BUILD_TYPE) build type and $(CMAKE_ARGS)"
|
||||
ifneq (,$(findstring sycl,$(BUILD_TYPE)))
|
||||
+bash -c "source $(ONEAPI_VARS); \
|
||||
cd llama.cpp && mkdir -p build && cd build && cmake .. $(CMAKE_ARGS) && cmake --build . --config Release -j $(JOBS) $(TARGET)"
|
||||
else
|
||||
+cd llama.cpp && mkdir -p build && cd build && cmake .. $(CMAKE_ARGS) && cmake --build . --config Release -j $(JOBS) $(TARGET)
|
||||
endif
|
||||
cp llama.cpp/build/bin/grpc-server .
|
||||
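As a usage sketch of the targets defined above (the variables and target names come from this Makefile; the specific gfx value is just one entry from the default AMDGPU_TARGETS list):

# plain x86_64 build, matching what the Dockerfile uses on AVX2-capable hosts
make -C backend/cpp/ik-llama-cpp ik-llama-cpp-avx2

# ROCm build; the Dockerfile falls back to the non-SIMD variant for hipblas
make -C backend/cpp/ik-llama-cpp BUILD_TYPE=hipblas AMDGPU_TARGETS=gfx1100 ik-llama-cpp-fallback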
backend/cpp/ik-llama-cpp/grpc-server.cpp (new file, 2652 lines; diff suppressed because it is too large)
backend/cpp/ik-llama-cpp/package.sh (new file, 58 lines)
|
|
@ -0,0 +1,58 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Script to copy the appropriate libraries based on architecture
|
||||
# This script is used in the final stage of the Dockerfile
|
||||
|
||||
set -e
|
||||
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
REPO_ROOT="${CURDIR}/../../.."
|
||||
|
||||
# Create lib directory
|
||||
mkdir -p $CURDIR/package/lib
|
||||
|
||||
cp -avrf $CURDIR/ik-llama-cpp-* $CURDIR/package/
|
||||
cp -rfv $CURDIR/run.sh $CURDIR/package/
|
||||
|
||||
# Detect architecture and copy appropriate libraries
|
||||
if [ -f "/lib64/ld-linux-x86-64.so.2" ]; then
|
||||
# x86_64 architecture
|
||||
echo "Detected x86_64 architecture, copying x86_64 libraries..."
|
||||
cp -arfLv /lib64/ld-linux-x86-64.so.2 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/x86_64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
elif [ -f "/lib/ld-linux-aarch64.so.1" ]; then
|
||||
# ARM64 architecture
|
||||
echo "Detected ARM64 architecture, copying ARM64 libraries..."
|
||||
cp -arfLv /lib/ld-linux-aarch64.so.1 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/aarch64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
else
|
||||
echo "Error: Could not detect architecture"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Package GPU libraries based on BUILD_TYPE
|
||||
# The GPU library packaging script will detect BUILD_TYPE and copy appropriate GPU libraries
|
||||
GPU_LIB_SCRIPT="${REPO_ROOT}/scripts/build/package-gpu-libs.sh"
|
||||
if [ -f "$GPU_LIB_SCRIPT" ]; then
|
||||
echo "Packaging GPU libraries for BUILD_TYPE=${BUILD_TYPE:-cpu}..."
|
||||
source "$GPU_LIB_SCRIPT" "$CURDIR/package/lib"
|
||||
package_gpu_libs
|
||||
fi
|
||||
|
||||
echo "Packaging completed successfully"
|
||||
ls -liah $CURDIR/package/
|
||||
ls -liah $CURDIR/package/lib/
|
||||
|
|
@ -0,0 +1,10 @@
|
|||
--- a/ggml/src/iqk/iqk_common.h
|
||||
+++ b/ggml/src/iqk/iqk_common.h
|
||||
@@ -9,6 +9,7 @@
|
||||
#pragma once
|
||||
|
||||
#include "iqk_config.h"
|
||||
+#include <cstdint>
|
||||
|
||||
#if defined IQK_IMPLEMENT
|
||||
|
||||
backend/cpp/ik-llama-cpp/prepare.sh (new file, 49 lines)
|
|
@ -0,0 +1,49 @@
|
|||
#!/bin/bash
|
||||
|
||||
## Patches
|
||||
|
||||
## Apply patches from the `patches` directory
|
||||
if [ -d "patches" ]; then
|
||||
for patch in $(ls patches); do
|
||||
echo "Applying patch $patch"
|
||||
patch -d llama.cpp/ -p1 < patches/$patch
|
||||
done
|
||||
fi
|
||||
|
||||
set -e
|
||||
|
||||
cp -r CMakeLists.txt llama.cpp/examples/grpc-server/
|
||||
cp -r grpc-server.cpp llama.cpp/examples/grpc-server/
|
||||
cp -r utils.hpp llama.cpp/examples/grpc-server/
|
||||
cp -rfv llama.cpp/vendor/nlohmann/json.hpp llama.cpp/examples/grpc-server/
|
||||
|
||||
## Copy clip/llava files for multimodal support (built as myclip library)
|
||||
cp -rfv llama.cpp/examples/llava/clip.h llama.cpp/examples/grpc-server/clip.h
|
||||
cp -rfv llama.cpp/examples/llava/clip.cpp llama.cpp/examples/grpc-server/clip.cpp
|
||||
cp -rfv llama.cpp/examples/llava/llava.cpp llama.cpp/examples/grpc-server/llava.cpp
|
||||
# Prepend llama.h include to llava.h
|
||||
echo '#include "llama.h"' > llama.cpp/examples/grpc-server/llava.h
|
||||
cat llama.cpp/examples/llava/llava.h >> llama.cpp/examples/grpc-server/llava.h
|
||||
# Copy clip-impl.h if it exists
|
||||
if [ -f llama.cpp/examples/llava/clip-impl.h ]; then
|
||||
cp -rfv llama.cpp/examples/llava/clip-impl.h llama.cpp/examples/grpc-server/clip-impl.h
|
||||
fi
|
||||
# Copy stb_image.h
|
||||
if [ -f llama.cpp/vendor/stb/stb_image.h ]; then
|
||||
cp -rfv llama.cpp/vendor/stb/stb_image.h llama.cpp/examples/grpc-server/stb_image.h
|
||||
elif [ -f llama.cpp/common/stb_image.h ]; then
|
||||
cp -rfv llama.cpp/common/stb_image.h llama.cpp/examples/grpc-server/stb_image.h
|
||||
fi
|
||||
|
||||
## Fix API compatibility in llava.cpp (llama_n_embd -> llama_model_n_embd)
|
||||
if [ -f llama.cpp/examples/grpc-server/llava.cpp ]; then
|
||||
sed -i 's/llama_n_embd(/llama_model_n_embd(/g' llama.cpp/examples/grpc-server/llava.cpp
|
||||
fi
|
||||
|
||||
set +e
|
||||
if grep -q "grpc-server" llama.cpp/examples/CMakeLists.txt; then
|
||||
echo "grpc-server already added"
|
||||
else
|
||||
echo "add_subdirectory(grpc-server)" >> llama.cpp/examples/CMakeLists.txt
|
||||
fi
|
||||
set -e
|
||||
backend/cpp/ik-llama-cpp/run.sh (new file, 40 lines)
|
|
@ -0,0 +1,40 @@
|
|||
#!/bin/bash
|
||||
set -ex
|
||||
|
||||
# Get the absolute current dir where the script is located
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
|
||||
cd /
|
||||
|
||||
echo "CPU info:"
|
||||
grep -e "model\sname" /proc/cpuinfo | head -1
|
||||
grep -e "flags" /proc/cpuinfo | head -1
|
||||
|
||||
# ik_llama.cpp requires AVX2 — default to avx2 binary
|
||||
BINARY=ik-llama-cpp-avx2
|
||||
|
||||
if [ -e $CURDIR/ik-llama-cpp-fallback ] && ! grep -q -e "\savx2\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX2 NOT found, using fallback"
|
||||
BINARY=ik-llama-cpp-fallback
|
||||
fi
|
||||
|
||||
# Extend ld library path with the dir where this script is located/lib
|
||||
if [ "$(uname)" == "Darwin" ]; then
|
||||
export DYLD_LIBRARY_PATH=$CURDIR/lib:$DYLD_LIBRARY_PATH
|
||||
#export DYLD_FALLBACK_LIBRARY_PATH=$CURDIR/lib:$DYLD_FALLBACK_LIBRARY_PATH
|
||||
else
|
||||
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
|
||||
fi
|
||||
|
||||
# If there is a lib/ld.so, use it
|
||||
if [ -f $CURDIR/lib/ld.so ]; then
|
||||
echo "Using lib/ld.so"
|
||||
echo "Using binary: $BINARY"
|
||||
exec $CURDIR/lib/ld.so $CURDIR/$BINARY "$@"
|
||||
fi
|
||||
|
||||
echo "Using binary: $BINARY"
|
||||
exec $CURDIR/$BINARY "$@"
|
||||
|
||||
# We should never reach this point, however just in case we do, run fallback
|
||||
exec $CURDIR/ik-llama-cpp-fallback "$@"
|
||||
backend/cpp/ik-llama-cpp/utils.hpp (new file, 483 lines)
|
|
@ -0,0 +1,483 @@
|
|||
// https://github.com/ggerganov/llama.cpp/blob/master/examples/server/utils.hpp
|
||||
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <set>
|
||||
#include <mutex>
|
||||
#include <condition_variable>
|
||||
#include <unordered_map>
|
||||
|
||||
#include "json.hpp"
|
||||
|
||||
#include "clip.h"
|
||||
|
||||
using json = nlohmann::json;
|
||||
|
||||
extern bool server_verbose;
|
||||
|
||||
#ifndef SERVER_VERBOSE
|
||||
#define SERVER_VERBOSE 1
|
||||
#endif
|
||||
|
||||
#if SERVER_VERBOSE != 1
|
||||
#define LOG_VERBOSE(MSG, ...)
|
||||
#else
|
||||
#define LOG_VERBOSE(MSG, ...) \
|
||||
do \
|
||||
{ \
|
||||
if (server_verbose) \
|
||||
{ \
|
||||
server_log("VERBOSE", __func__, __LINE__, MSG, __VA_ARGS__); \
|
||||
} \
|
||||
} while (0)
|
||||
#endif
|
||||
|
||||
#define LOG_ERROR( MSG, ...) server_log("ERROR", __func__, __LINE__, MSG, __VA_ARGS__)
|
||||
#define LOG_WARNING(MSG, ...) server_log("WARNING", __func__, __LINE__, MSG, __VA_ARGS__)
|
||||
#define LOG_INFO( MSG, ...) server_log("INFO", __func__, __LINE__, MSG, __VA_ARGS__)
|
||||
|
||||
//
|
||||
// parallel
|
||||
//
|
||||
|
||||
enum server_state {
|
||||
SERVER_STATE_LOADING_MODEL, // Server is starting up, model not fully loaded yet
|
||||
SERVER_STATE_READY, // Server is ready and model is loaded
|
||||
SERVER_STATE_ERROR // An error occurred, load_model failed
|
||||
};
|
||||
|
||||
enum task_type {
|
||||
TASK_TYPE_COMPLETION,
|
||||
TASK_TYPE_CANCEL,
|
||||
TASK_TYPE_NEXT_RESPONSE
|
||||
};
|
||||
|
||||
struct task_server {
|
||||
int id = -1; // to be filled by llama_server_queue
|
||||
int target_id;
|
||||
task_type type;
|
||||
json data;
|
||||
bool infill_mode = false;
|
||||
bool embedding_mode = false;
|
||||
int multitask_id = -1;
|
||||
};
|
||||
|
||||
struct task_result {
|
||||
int id;
|
||||
int multitask_id = -1;
|
||||
bool stop;
|
||||
bool error;
|
||||
json result_json;
|
||||
};
|
||||
|
||||
struct task_multi {
|
||||
int id;
|
||||
std::set<int> subtasks_remaining{};
|
||||
std::vector<task_result> results{};
|
||||
};
|
||||
|
||||
// TODO: can become bool if we can't find use of more states
|
||||
enum slot_state
|
||||
{
|
||||
IDLE,
|
||||
PROCESSING,
|
||||
};
|
||||
|
||||
enum slot_command
|
||||
{
|
||||
NONE,
|
||||
LOAD_PROMPT,
|
||||
RELEASE,
|
||||
};
|
||||
|
||||
struct slot_params
|
||||
{
|
||||
bool stream = true;
|
||||
bool cache_prompt = false; // remember the prompt to avoid reprocessing all prompt
|
||||
|
||||
uint32_t seed = -1; // RNG seed
|
||||
int32_t n_keep = 0; // number of tokens to keep from initial prompt
|
||||
int32_t n_predict = -1; // new tokens to predict
|
||||
|
||||
std::vector<std::string> antiprompt;
|
||||
|
||||
json input_prefix;
|
||||
json input_suffix;
|
||||
};
|
||||
|
||||
struct slot_image
|
||||
{
|
||||
int32_t id;
|
||||
|
||||
bool request_encode_image = false;
|
||||
float * image_embedding = nullptr;
|
||||
int32_t image_tokens = 0;
|
||||
|
||||
clip_image_u8 * img_data;
|
||||
|
||||
std::string prefix_prompt; // prompt text that comes before this image
|
||||
};
|
||||
|
||||
// completion token output with probabilities
|
||||
struct completion_token_output
|
||||
{
|
||||
struct token_prob
|
||||
{
|
||||
llama_token tok;
|
||||
float prob;
|
||||
};
|
||||
|
||||
std::vector<token_prob> probs;
|
||||
llama_token tok;
|
||||
std::string text_to_send;
|
||||
};
|
||||
|
||||
static inline void server_log(const char *level, const char *function, int line,
|
||||
const char *message, const nlohmann::ordered_json &extra)
|
||||
{
|
||||
nlohmann::ordered_json log
|
||||
{
|
||||
{"timestamp", time(nullptr)},
|
||||
{"level", level},
|
||||
{"function", function},
|
||||
{"line", line},
|
||||
{"message", message},
|
||||
};
|
||||
|
||||
if (!extra.empty())
|
||||
{
|
||||
log.merge_patch(extra);
|
||||
}
|
||||
|
||||
const std::string str = log.dump(-1, ' ', false, json::error_handler_t::replace);
|
||||
printf("%.*s\n", (int)str.size(), str.data());
|
||||
fflush(stdout);
|
||||
}
|
||||
|
||||
//
|
||||
// server utils
|
||||
//
|
||||
|
||||
template <typename T>
|
||||
static T json_value(const json &body, const std::string &key, const T &default_value)
|
||||
{
|
||||
// Fallback null to default value
|
||||
return body.contains(key) && !body.at(key).is_null()
|
||||
? body.value(key, default_value)
|
||||
: default_value;
|
||||
}
|
||||
|
||||
inline std::string format_chatml(std::vector<json> messages)
|
||||
{
|
||||
std::ostringstream chatml_msgs;
|
||||
|
||||
for (auto it = messages.begin(); it != messages.end(); ++it) {
|
||||
chatml_msgs << "<|im_start|>"
|
||||
<< json_value(*it, "role", std::string("user")) << '\n';
|
||||
chatml_msgs << json_value(*it, "content", std::string(""))
|
||||
<< "<|im_end|>\n";
|
||||
}
|
||||
|
||||
chatml_msgs << "<|im_start|>assistant" << '\n';
|
||||
|
||||
return chatml_msgs.str();
|
||||
}
|
||||
|
||||
//
|
||||
// work queue utils
|
||||
//
|
||||
|
||||
struct llama_server_queue {
|
||||
int id = 0;
|
||||
std::mutex mutex_tasks;
|
||||
// queues
|
||||
std::vector<task_server> queue_tasks;
|
||||
std::vector<task_server> queue_tasks_deferred;
|
||||
std::vector<task_multi> queue_multitasks;
|
||||
std::condition_variable condition_tasks;
|
||||
// callback functions
|
||||
std::function<void(task_server&)> callback_new_task;
|
||||
std::function<void(task_multi&)> callback_finish_multitask;
|
||||
std::function<void(void)> callback_all_task_finished;
|
||||
|
||||
// Add a new task to the end of the queue
|
||||
int post(task_server task) {
|
||||
std::unique_lock<std::mutex> lock(mutex_tasks);
|
||||
if (task.id == -1) {
|
||||
task.id = id++;
|
||||
}
|
||||
queue_tasks.push_back(std::move(task));
|
||||
condition_tasks.notify_one();
|
||||
return task.id;
|
||||
}
|
||||
|
||||
// Add a new task, but defer until one slot is available
|
||||
void defer(task_server task) {
|
||||
std::unique_lock<std::mutex> lock(mutex_tasks);
|
||||
queue_tasks_deferred.push_back(std::move(task));
|
||||
}
|
||||
|
||||
// Get the next id for creating a new task
|
||||
int get_new_id() {
|
||||
std::unique_lock<std::mutex> lock(mutex_tasks);
|
||||
return id++;
|
||||
}
|
||||
|
||||
// Register function to process a new task
|
||||
void on_new_task(std::function<void(task_server&)> callback) {
|
||||
callback_new_task = callback;
|
||||
}
|
||||
|
||||
// Register function to process a multitask
|
||||
void on_finish_multitask(std::function<void(task_multi&)> callback) {
|
||||
callback_finish_multitask = callback;
|
||||
}
|
||||
|
||||
// Register the function to be called when the batch of tasks is finished
|
||||
void on_all_tasks_finished(std::function<void(void)> callback) {
|
||||
callback_all_task_finished = callback;
|
||||
}
|
||||
|
||||
// Call when the state of one slot is changed
|
||||
void notify_slot_changed() {
|
||||
// move deferred tasks back to main loop
|
||||
std::unique_lock<std::mutex> lock(mutex_tasks);
|
||||
for (auto & task : queue_tasks_deferred) {
|
||||
queue_tasks.push_back(std::move(task));
|
||||
}
|
||||
queue_tasks_deferred.clear();
|
||||
}
|
||||
|
||||
// Start the main loop. This call is blocking
|
||||
[[noreturn]]
|
||||
void start_loop() {
|
||||
while (true) {
|
||||
// new task arrived
|
||||
LOG_VERBOSE("have new task", {});
|
||||
{
|
||||
while (true)
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(mutex_tasks);
|
||||
if (queue_tasks.empty()) {
|
||||
lock.unlock();
|
||||
break;
|
||||
}
|
||||
task_server task = queue_tasks.front();
|
||||
queue_tasks.erase(queue_tasks.begin());
|
||||
lock.unlock();
|
||||
LOG_VERBOSE("callback_new_task", {});
|
||||
callback_new_task(task);
|
||||
}
|
||||
LOG_VERBOSE("callback_all_task_finished", {});
|
||||
// process and update all the multitasks
|
||||
auto queue_iterator = queue_multitasks.begin();
|
||||
while (queue_iterator != queue_multitasks.end())
|
||||
{
|
||||
if (queue_iterator->subtasks_remaining.empty())
|
||||
{
|
||||
// all subtasks done == multitask is done
|
||||
task_multi current_multitask = *queue_iterator;
|
||||
callback_finish_multitask(current_multitask);
|
||||
// remove this multitask
|
||||
queue_iterator = queue_multitasks.erase(queue_iterator);
|
||||
}
|
||||
else
|
||||
{
|
||||
++queue_iterator;
|
||||
}
|
||||
}
|
||||
// all tasks in the current loop are finished
|
||||
callback_all_task_finished();
|
||||
}
|
||||
LOG_VERBOSE("wait for new task", {});
|
||||
// wait for new task
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(mutex_tasks);
|
||||
if (queue_tasks.empty()) {
|
||||
condition_tasks.wait(lock, [&]{
|
||||
return !queue_tasks.empty();
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
//
|
||||
// functions to manage multitasks
|
||||
//
|
||||
|
||||
// add a multitask by specifying the ids of all its subtasks (each subtask is a task_server)
|
||||
void add_multitask(int multitask_id, std::vector<int>& sub_ids)
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(mutex_tasks);
|
||||
task_multi multi;
|
||||
multi.id = multitask_id;
|
||||
std::copy(sub_ids.begin(), sub_ids.end(), std::inserter(multi.subtasks_remaining, multi.subtasks_remaining.end()));
|
||||
queue_multitasks.push_back(multi);
|
||||
}
|
||||
|
||||
// update the remaining subtasks, while appending results to the multitask
|
||||
void update_multitask(int multitask_id, int subtask_id, task_result& result)
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(mutex_tasks);
|
||||
for (auto& multitask : queue_multitasks)
|
||||
{
|
||||
if (multitask.id == multitask_id)
|
||||
{
|
||||
multitask.subtasks_remaining.erase(subtask_id);
|
||||
multitask.results.push_back(result);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
struct llama_server_response {
|
||||
typedef std::function<void(int, int, task_result&)> callback_multitask_t;
|
||||
callback_multitask_t callback_update_multitask;
|
||||
// for keeping track of all tasks waiting for the result
|
||||
std::set<int> waiting_task_ids;
|
||||
// the main result queue
|
||||
std::vector<task_result> queue_results;
|
||||
std::mutex mutex_results;
|
||||
std::condition_variable condition_results;
|
||||
|
||||
void add_waiting_task_id(int task_id) {
|
||||
std::unique_lock<std::mutex> lock(mutex_results);
|
||||
waiting_task_ids.insert(task_id);
|
||||
}
|
||||
|
||||
void remove_waiting_task_id(int task_id) {
|
||||
std::unique_lock<std::mutex> lock(mutex_results);
|
||||
waiting_task_ids.erase(task_id);
|
||||
}
|
||||
|
||||
// This function blocks the thread until there is a response for this task_id
|
||||
task_result recv(int task_id) {
|
||||
while (true)
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(mutex_results);
|
||||
condition_results.wait(lock, [&]{
|
||||
return !queue_results.empty();
|
||||
});
|
||||
LOG_VERBOSE("condition_results unblock", {});
|
||||
|
||||
for (int i = 0; i < (int) queue_results.size(); i++)
|
||||
{
|
||||
if (queue_results[i].id == task_id)
|
||||
{
|
||||
assert(queue_results[i].multitask_id == -1);
|
||||
task_result res = queue_results[i];
|
||||
queue_results.erase(queue_results.begin() + i);
|
||||
return res;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// should never reach here
|
||||
}
|
||||
|
||||
// Register the function to update multitask
|
||||
void on_multitask_update(callback_multitask_t callback) {
|
||||
callback_update_multitask = callback;
|
||||
}
|
||||
|
||||
// Send a new result to a waiting task_id
|
||||
void send(task_result result) {
|
||||
std::unique_lock<std::mutex> lock(mutex_results);
|
||||
LOG_VERBOSE("send new result", {});
|
||||
for (auto& task_id : waiting_task_ids) {
|
||||
// LOG_TEE("waiting task id %i \n", task_id);
|
||||
// for now, tasks that have associated parent multitasks just get erased once multitask picks up the result
|
||||
if (result.multitask_id == task_id)
|
||||
{
|
||||
LOG_VERBOSE("callback_update_multitask", {});
|
||||
callback_update_multitask(task_id, result.id, result);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (result.id == task_id)
|
||||
{
|
||||
LOG_VERBOSE("queue_results.push_back", {});
|
||||
queue_results.push_back(result);
|
||||
condition_results.notify_one();
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
//
|
||||
// base64 utils (TODO: move to common in the future)
|
||||
//
|
||||
|
||||
static const std::string base64_chars =
|
||||
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
|
||||
"abcdefghijklmnopqrstuvwxyz"
|
||||
"0123456789+/";
|
||||
|
||||
static inline bool is_base64(uint8_t c)
|
||||
{
|
||||
return (isalnum(c) || (c == '+') || (c == '/'));
|
||||
}
|
||||
|
||||
static inline std::vector<uint8_t> base64_decode(const std::string & encoded_string)
|
||||
{
|
||||
int i = 0;
|
||||
int j = 0;
|
||||
int in_ = 0;
|
||||
|
||||
int in_len = encoded_string.size();
|
||||
|
||||
uint8_t char_array_4[4];
|
||||
uint8_t char_array_3[3];
|
||||
|
||||
std::vector<uint8_t> ret;
|
||||
|
||||
while (in_len-- && (encoded_string[in_] != '=') && is_base64(encoded_string[in_]))
|
||||
{
|
||||
char_array_4[i++] = encoded_string[in_]; in_++;
|
||||
if (i == 4)
|
||||
{
|
||||
for (i = 0; i <4; i++)
|
||||
{
|
||||
char_array_4[i] = base64_chars.find(char_array_4[i]);
|
||||
}
|
||||
|
||||
char_array_3[0] = ((char_array_4[0] ) << 2) + ((char_array_4[1] & 0x30) >> 4);
|
||||
char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
|
||||
char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
|
||||
|
||||
for (i = 0; (i < 3); i++)
|
||||
{
|
||||
ret.push_back(char_array_3[i]);
|
||||
}
|
||||
i = 0;
|
||||
}
|
||||
}
|
||||
|
||||
if (i)
|
||||
{
|
||||
for (j = i; j <4; j++)
|
||||
{
|
||||
char_array_4[j] = 0;
|
||||
}
|
||||
|
||||
for (j = 0; j <4; j++)
|
||||
{
|
||||
char_array_4[j] = base64_chars.find(char_array_4[j]);
|
||||
}
|
||||
|
||||
char_array_3[0] = ((char_array_4[0] ) << 2) + ((char_array_4[1] & 0x30) >> 4);
|
||||
char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
|
||||
char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
|
||||
|
||||
for (j = 0; (j < i - 1); j++)
|
||||
{
|
||||
ret.push_back(char_array_3[j]);
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
@ -62,7 +62,18 @@ add_executable(${TARGET} grpc-server.cpp json.hpp httplib.h)
|
|||
target_include_directories(${TARGET} PRIVATE ../llava)
|
||||
target_include_directories(${TARGET} PRIVATE ${CMAKE_SOURCE_DIR})
|
||||
|
||||
target_link_libraries(${TARGET} PRIVATE common llama mtmd ${CMAKE_THREAD_LIBS_INIT} absl::flags hw_grpc_proto
|
||||
# Upstream llama.cpp renamed the `common` helpers library to `llama-common`.
|
||||
# Forks that branched before the rename (e.g. llama-cpp-turboquant) still
|
||||
# expose it as `common`. Detect which one is present so the same CMakeLists
|
||||
# drives both builds — otherwise an unresolved name silently degrades to a
|
||||
# plain `-l` flag and the PUBLIC include dir (where common.h lives) is lost.
|
||||
if (TARGET llama-common)
|
||||
set(_LLAMA_COMMON_TARGET llama-common)
|
||||
else()
|
||||
set(_LLAMA_COMMON_TARGET common)
|
||||
endif()
|
||||
|
||||
target_link_libraries(${TARGET} PRIVATE ${_LLAMA_COMMON_TARGET} llama mtmd ${CMAKE_THREAD_LIBS_INIT} absl::flags hw_grpc_proto
|
||||
absl::flags_parse
|
||||
gRPC::${_REFLECTION}
|
||||
gRPC::${_GRPC_GRPCPP}
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
|
||||
LLAMA_VERSION?=d0a6dfeb28a09831d904fc4d910ddb740da82834
|
||||
LLAMA_VERSION?=cf8b0dbda9ac0eac30ee33f87bc6702ead1c4664
|
||||
LLAMA_REPO?=https://github.com/ggerganov/llama.cpp
|
||||
|
||||
CMAKE_ARGS?=
|
||||
|
|
@ -33,7 +33,7 @@ else ifeq ($(BUILD_TYPE),hipblas)
|
|||
ROCM_PATH ?= /opt/rocm
|
||||
export CXX=$(ROCM_HOME)/llvm/bin/clang++
|
||||
export CC=$(ROCM_HOME)/llvm/bin/clang
|
||||
AMDGPU_TARGETS?=gfx803,gfx900,gfx906,gfx908,gfx90a,gfx942,gfx1010,gfx1030,gfx1032,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201
|
||||
AMDGPU_TARGETS?=gfx908,gfx90a,gfx942,gfx950,gfx1030,gfx1100,gfx1101,gfx1102,gfx1151,gfx1200,gfx1201
|
||||
CMAKE_ARGS+=-DGGML_HIP=ON -DAMDGPU_TARGETS=$(AMDGPU_TARGETS)
|
||||
else ifeq ($(BUILD_TYPE),vulkan)
|
||||
CMAKE_ARGS+=-DGGML_VULKAN=1
|
||||
|
|
@ -132,7 +132,7 @@ llama.cpp:
|
|||
cd llama.cpp && \
|
||||
git init && \
|
||||
git remote add origin $(LLAMA_REPO) && \
|
||||
git fetch origin && \
|
||||
git fetch --all --tags && \
|
||||
git checkout -b build $(LLAMA_VERSION) && \
|
||||
git submodule update --init --recursive --depth 1 --single-branch
|
||||
|
||||
|
|
|
|||
|
|
@ -26,6 +26,8 @@
|
|||
#include <regex>
|
||||
#include <atomic>
|
||||
#include <cstdlib>
|
||||
#include <fstream>
|
||||
#include <iterator>
|
||||
#include <mutex>
|
||||
#include <signal.h>
|
||||
#include <thread>
|
||||
|
|
@ -76,6 +78,27 @@ static grpc::Status checkAuth(grpc::ServerContext* context) {
|
|||
return grpc::Status(grpc::StatusCode::UNAUTHENTICATED, "invalid token");
|
||||
}
|
||||
|
||||
// Minimal base64 encoder. The C++ backend already pulls in base64_decode from
|
||||
// llama.cpp's server-common.cpp, but no encoder is exposed — and we need one to
|
||||
// hand audio bytes to the existing PredictOptions.audios path (which expects
|
||||
// base64-encoded strings, just like images).
|
||||
static std::string base64_encode_bytes(const unsigned char* data, size_t len) {
|
||||
static const char tbl[] =
|
||||
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
|
||||
std::string out;
|
||||
out.reserve(((len + 2) / 3) * 4);
|
||||
for (size_t i = 0; i < len; i += 3) {
|
||||
uint32_t triple = (uint32_t(data[i]) << 16);
|
||||
if (i + 1 < len) triple |= (uint32_t(data[i + 1]) << 8);
|
||||
if (i + 2 < len) triple |= uint32_t(data[i + 2]);
|
||||
out.push_back(tbl[(triple >> 18) & 0x3F]);
|
||||
out.push_back(tbl[(triple >> 12) & 0x3F]);
|
||||
out.push_back(i + 1 < len ? tbl[(triple >> 6) & 0x3F] : '=');
|
||||
out.push_back(i + 2 < len ? tbl[triple & 0x3F] : '=');
|
||||
}
|
||||
return out;
|
||||
}
|
||||
|
||||
// END LocalAI
|
||||
|
||||
|
||||
|
|
@ -1614,6 +1637,7 @@ public:
|
|||
ctx_server.impl->vocab,
|
||||
params_base,
|
||||
ctx_server.get_meta().slot_n_ctx,
|
||||
ctx_server.get_meta().logit_bias_eog,
|
||||
data);
|
||||
task.id_slot = json_value(data, "id_slot", -1);
|
||||
|
||||
|
|
@ -1715,12 +1739,23 @@ public:
|
|||
}
|
||||
};
|
||||
|
||||
// Process first result
|
||||
// Process first result.
|
||||
// When TASK_RESPONSE_TYPE_OAI_CHAT is used, the first token may
|
||||
// produce a JSON array with a role-init element followed by the
|
||||
// actual content element. We must only attach chat deltas to the
|
||||
// content element — attaching to both would duplicate the first
|
||||
// token since oaicompat_msg_diffs is the same for both.
|
||||
json first_res_json = first_result->to_json();
|
||||
if (first_res_json.is_array()) {
|
||||
for (const auto & res : first_res_json) {
|
||||
auto reply = build_reply_from_json(res, first_result.get());
|
||||
attach_chat_deltas(reply, first_result.get());
|
||||
// Skip chat deltas for role-init elements (have "role" in
|
||||
// delta but no content/reasoning diffs of their own).
|
||||
bool is_role_init = res.contains("choices") && !res["choices"].empty() &&
|
||||
res["choices"][0].value("delta", json::object()).contains("role");
|
||||
if (!is_role_init) {
|
||||
attach_chat_deltas(reply, first_result.get());
|
||||
}
|
||||
writer->Write(reply);
|
||||
}
|
||||
} else {
|
||||
|
|
@ -1744,7 +1779,11 @@ public:
|
|||
if (res_json.is_array()) {
|
||||
for (const auto & res : res_json) {
|
||||
auto reply = build_reply_from_json(res, result.get());
|
||||
attach_chat_deltas(reply, result.get());
|
||||
bool is_role_init = res.contains("choices") && !res["choices"].empty() &&
|
||||
res["choices"][0].value("delta", json::object()).contains("role");
|
||||
if (!is_role_init) {
|
||||
attach_chat_deltas(reply, result.get());
|
||||
}
|
||||
writer->Write(reply);
|
||||
}
|
||||
} else {
|
||||
|
|
@ -2382,6 +2421,7 @@ public:
|
|||
ctx_server.impl->vocab,
|
||||
params_base,
|
||||
ctx_server.get_meta().slot_n_ctx,
|
||||
ctx_server.get_meta().logit_bias_eog,
|
||||
data);
|
||||
task.id_slot = json_value(data, "id_slot", -1);
|
||||
|
||||
|
|
@ -2774,6 +2814,13 @@ public:
|
|||
return grpc::Status(grpc::StatusCode::FAILED_PRECONDITION, "Model not loaded");
|
||||
}
|
||||
|
||||
// Report the active multimodal media marker so the Go layer can emit the
|
||||
// same string when rendering prompts outside the tokenizer-template path.
|
||||
// Only meaningful when an mtmd context was initialized (vision/audio models).
|
||||
if (ctx_server.impl->mctx != nullptr) {
|
||||
response->set_media_marker(get_media_marker());
|
||||
}
|
||||
|
||||
// Check if chat templates are initialized
|
||||
if (ctx_server.impl->chat_params.tmpls == nullptr) {
|
||||
// If templates are not initialized, we can't detect thinking support
|
||||
|
|
@ -2914,6 +2961,119 @@ public:
|
|||
|
||||
return grpc::Status::OK;
|
||||
}
|
||||
|
||||
// runTranscriptionAsCompletion implements OAI /v1/audio/transcriptions on
|
||||
// top of the existing chat-completion + multimodal-audio pipeline, exactly
|
||||
// the way upstream llama.cpp's server does it (see
|
||||
// tools/server/server-context.cpp post_transcriptions_oai → forwards into
|
||||
// handle_completions_impl with a single user message attaching the audio
|
||||
// file via the mtmd marker).
|
||||
//
|
||||
// We synthesize a backend::PredictOptions with one user message
|
||||
// ("Transcribe audio to text" + optional language hint) and the audio
|
||||
// bytes attached via the existing PredictOptions.audios field, then
|
||||
// delegate to our own Predict() handler. This keeps every multimodal
|
||||
// codepath identical to the chat path and avoids duplicating ~700 lines
|
||||
// of task-construction logic.
|
||||
grpc::Status runTranscriptionAsCompletion(grpc::ServerContext* context,
|
||||
const backend::TranscriptRequest* request,
|
||||
backend::Reply* out_reply) {
|
||||
if (params_base.model.path.empty()) {
|
||||
return grpc::Status(grpc::StatusCode::FAILED_PRECONDITION, "Model not loaded");
|
||||
}
|
||||
if (request->dst().empty()) {
|
||||
return grpc::Status(grpc::StatusCode::INVALID_ARGUMENT, "dst (audio file path) is required");
|
||||
}
|
||||
|
||||
// Read audio bytes from the path LocalAI's HTTP layer wrote.
|
||||
std::ifstream f(request->dst(), std::ios::binary);
|
||||
if (!f.is_open()) {
|
||||
return grpc::Status(grpc::StatusCode::INVALID_ARGUMENT, "failed to open audio file: " + request->dst());
|
||||
}
|
||||
std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(f)),
|
||||
std::istreambuf_iterator<char>());
|
||||
f.close();
|
||||
if (bytes.empty()) {
|
||||
return grpc::Status(grpc::StatusCode::INVALID_ARGUMENT, "audio file is empty: " + request->dst());
|
||||
}
|
||||
|
||||
std::string b64 = base64_encode_bytes(bytes.data(), bytes.size());
|
||||
|
||||
// Build the same prompt upstream uses in convert_transcriptions_to_chatcmpl.
|
||||
std::string user_prompt = "Transcribe audio to text";
|
||||
if (!request->language().empty()) {
|
||||
user_prompt += " (language: " + request->language() + ")";
|
||||
}
|
||||
if (!request->prompt().empty()) {
|
||||
// Optional context hint from the caller.
|
||||
user_prompt += "\n" + request->prompt();
|
||||
}
|
||||
|
||||
backend::PredictOptions synthetic;
|
||||
synthetic.set_usetokenizertemplate(true);
|
||||
synthetic.set_temperature(request->temperature());
|
||||
// Generation length: leave at 0 so parse_options uses -1 (model default).
|
||||
// The model's stop tokens / EOS handle termination naturally for ASR.
|
||||
backend::Message* msg = synthetic.add_messages();
|
||||
msg->set_role("user");
|
||||
msg->set_content(user_prompt);
|
||||
synthetic.add_audios(b64);
|
||||
|
||||
return Predict(context, &synthetic, out_reply);
|
||||
}
|
||||
|
||||
grpc::Status AudioTranscription(ServerContext* context,
|
||||
const backend::TranscriptRequest* request,
|
||||
backend::TranscriptResult* response) override {
|
||||
auto auth = checkAuth(context);
|
||||
if (!auth.ok()) return auth;
|
||||
|
||||
backend::Reply reply;
|
||||
grpc::Status st = runTranscriptionAsCompletion(context, request, &reply);
|
||||
if (!st.ok()) {
|
||||
return st;
|
||||
}
|
||||
response->set_text(reply.message());
|
||||
if (!request->language().empty()) {
|
||||
response->set_language(request->language());
|
||||
}
|
||||
return grpc::Status::OK;
|
||||
}
|
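
As a usage illustration for the transcription path above, a Go client could call the new RPC along these lines. The pb package and BackendClient come from LocalAI's generated gRPC bindings; the exact method and field names (AudioTranscription, Dst, Language, Text) are assumed from the proto used by the C++ handler, so treat this as a sketch of the call shape rather than the definitive API:

package main

import (
	"context"
	"fmt"
	"log"

	pb "github.com/mudler/LocalAI/pkg/grpc/proto"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewBackendClient(conn)
	// Dst points at the audio file LocalAI's HTTP layer already wrote to disk;
	// the backend reads it, base64-encodes it and runs it through the
	// chat-completion path as described above.
	res, err := client.AudioTranscription(context.Background(), &pb.TranscriptRequest{
		Dst:      "/tmp/upload.wav", // hypothetical path
		Language: "en",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.Text)
}
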
||||
|
||||
grpc::Status AudioTranscriptionStream(ServerContext* context,
|
||||
const backend::TranscriptRequest* request,
|
||||
grpc::ServerWriter<backend::TranscriptStreamResponse>* writer) override {
|
||||
auto auth = checkAuth(context);
|
||||
if (!auth.ok()) return auth;
|
||||
|
||||
// Buffered streaming: run the transcription as a normal chat
|
||||
// completion, then emit one delta + one final event. Real
|
||||
// token-by-token streaming would require refactoring PredictStream's
|
||||
// 700-line writer-coupled body; the HTTP/SSE contract is identical
|
||||
// either way, and clients that only consume the assembled text don't
|
||||
// notice the difference.
|
||||
backend::Reply reply;
|
||||
grpc::Status st = runTranscriptionAsCompletion(context, request, &reply);
|
||||
if (!st.ok()) {
|
||||
return st;
|
||||
}
|
||||
|
||||
const std::string& text = reply.message();
|
||||
if (!text.empty()) {
|
||||
backend::TranscriptStreamResponse delta_chunk;
|
||||
delta_chunk.set_delta(text);
|
||||
writer->Write(delta_chunk);
|
||||
}
|
||||
|
||||
backend::TranscriptStreamResponse final_chunk;
|
||||
backend::TranscriptResult* final_result = final_chunk.mutable_final_result();
|
||||
final_result->set_text(text);
|
||||
if (!request->language().empty()) {
|
||||
final_result->set_language(request->language());
|
||||
}
|
||||
writer->Write(final_chunk);
|
||||
return grpc::Status::OK;
|
||||
}
|
||||
};
|
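
For completeness, a hedged Go sketch of a consumer for the buffered transcription stream implemented above: at most one delta chunk followed by one final chunk. The method and getter names (AudioTranscriptionStream, GetDelta, GetFinalResult, GetText) are assumed from the proto; adjust to the generated bindings:

package transcribe

import (
	"context"
	"io"

	pb "github.com/mudler/LocalAI/pkg/grpc/proto"
)

// consumeTranscriptStream drains the stream and returns the assembled text,
// preferring the final event when it arrives.
func consumeTranscriptStream(ctx context.Context, client pb.BackendClient, dst string) (string, error) {
	stream, err := client.AudioTranscriptionStream(ctx, &pb.TranscriptRequest{Dst: dst})
	if err != nil {
		return "", err
	}
	var out string
	for {
		chunk, err := stream.Recv()
		if err == io.EOF {
			return out, nil
		}
		if err != nil {
			return "", err
		}
		if final := chunk.GetFinalResult(); final != nil {
			out = final.GetText()
			continue
		}
		out += chunk.GetDelta()
	}
}
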
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -46,6 +46,10 @@ if [ "$(uname)" == "Darwin" ]; then
|
|||
#export DYLD_FALLBACK_LIBRARY_PATH=$CURDIR/lib:$DYLD_FALLBACK_LIBRARY_PATH
|
||||
else
|
||||
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
|
||||
# Tell rocBLAS where to find TensileLibrary data (GPU kernel tuning files)
|
||||
if [ -d "$CURDIR/lib/rocblas/library" ]; then
|
||||
export ROCBLAS_TENSILE_LIBPATH=$CURDIR/lib/rocblas/library
|
||||
fi
|
||||
fi
|
||||
|
||||
# If there is a lib/ld.so, use it
|
||||
|
|
|
|||
81
backend/cpp/turboquant/Makefile
Normal file
|
|
@ -0,0 +1,81 @@
|
|||
|
||||
# Pinned to the HEAD of feature/turboquant-kv-cache on https://github.com/TheTom/llama-cpp-turboquant.
|
||||
# Auto-bumped nightly by .github/workflows/bump_deps.yaml.
|
||||
TURBOQUANT_VERSION?=627ebbc6e27727bd4f65422d8aa60b13404993c8
|
||||
LLAMA_REPO?=https://github.com/TheTom/llama-cpp-turboquant
|
||||
|
||||
CMAKE_ARGS?=
|
||||
BUILD_TYPE?=
|
||||
NATIVE?=false
|
||||
ONEAPI_VARS?=/opt/intel/oneapi/setvars.sh
|
||||
TARGET?=--target grpc-server
|
||||
JOBS?=$(shell nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
|
||||
ARCH?=$(shell uname -m)
|
||||
|
||||
CURRENT_MAKEFILE_DIR := $(dir $(abspath $(lastword $(MAKEFILE_LIST))))
|
||||
LLAMA_CPP_DIR := $(CURRENT_MAKEFILE_DIR)/../llama-cpp
|
||||
|
||||
GREEN := \033[0;32m
|
||||
RESET := \033[0m
|
||||
|
||||
# turboquant is a llama.cpp fork. Rather than duplicating grpc-server.cpp / CMakeLists.txt /
|
||||
# prepare.sh we reuse the ones in backend/cpp/llama-cpp, and only swap which repo+sha the
|
||||
# fetch step pulls. Each flavor target copies ../llama-cpp into a sibling ../turboquant-<flavor>-build
|
||||
# directory, then invokes llama-cpp's own build-llama-cpp-grpc-server with LLAMA_REPO/LLAMA_VERSION
|
||||
# overridden to point at the fork.
|
||||
PATCHES_DIR := $(CURRENT_MAKEFILE_DIR)/patches
|
||||
|
||||
# Each flavor target:
|
||||
# 1. copies backend/cpp/llama-cpp/ (grpc-server.cpp + prepare.sh + CMakeLists.txt + Makefile)
|
||||
# into a sibling turboquant-<flavor>-build directory;
|
||||
# 2. clones the turboquant fork into turboquant-<flavor>-build/llama.cpp via the copy's
|
||||
# own `llama.cpp` target, overriding LLAMA_REPO/LLAMA_VERSION;
|
||||
# 3. applies patches from backend/cpp/turboquant/patches/ to the cloned fork sources
|
||||
# (needed until the fork catches up with upstream server-context.cpp changes);
|
||||
# 4. runs the copy's `grpc-server` target, which produces the binary we copy up as
|
||||
# turboquant-<flavor>.
|
||||
define turboquant-build
|
||||
rm -rf $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build
|
||||
cp -rf $(LLAMA_CPP_DIR) $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build purge
|
||||
# Augment the copied grpc-server.cpp's KV-cache allow-list with the
|
||||
# fork's turbo2/turbo3/turbo4 types. We patch the *copy*, never the
|
||||
# original under backend/cpp/llama-cpp/, so the stock llama-cpp build
|
||||
# stays compiling against vanilla upstream.
|
||||
bash $(CURRENT_MAKEFILE_DIR)/patch-grpc-server.sh $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build/grpc-server.cpp
|
||||
$(info $(GREEN)I turboquant build info:$(1)$(RESET))
|
||||
LLAMA_REPO=$(LLAMA_REPO) LLAMA_VERSION=$(TURBOQUANT_VERSION) \
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build llama.cpp
|
||||
bash $(CURRENT_MAKEFILE_DIR)/apply-patches.sh $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build/llama.cpp $(PATCHES_DIR)
|
||||
CMAKE_ARGS="$(CMAKE_ARGS) $(2)" TARGET="$(3)" \
|
||||
LLAMA_REPO=$(LLAMA_REPO) LLAMA_VERSION=$(TURBOQUANT_VERSION) \
|
||||
$(MAKE) -C $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build grpc-server
|
||||
cp -rfv $(CURRENT_MAKEFILE_DIR)/../turboquant-$(1)-build/grpc-server turboquant-$(1)
|
||||
endef
|
||||
|
||||
turboquant-avx2:
|
||||
$(call turboquant-build,avx2,-DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=off -DGGML_FMA=on -DGGML_F16C=on,--target grpc-server)
|
||||
|
||||
turboquant-avx512:
|
||||
$(call turboquant-build,avx512,-DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=on -DGGML_FMA=on -DGGML_F16C=on,--target grpc-server)
|
||||
|
||||
turboquant-avx:
|
||||
$(call turboquant-build,avx,-DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off,--target grpc-server)
|
||||
|
||||
turboquant-fallback:
|
||||
$(call turboquant-build,fallback,-DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off,--target grpc-server)
|
||||
|
||||
turboquant-grpc:
|
||||
$(call turboquant-build,grpc,-DGGML_RPC=ON -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off,--target grpc-server --target rpc-server)
|
||||
|
||||
turboquant-rpc-server: turboquant-grpc
|
||||
cp -rf $(CURRENT_MAKEFILE_DIR)/../turboquant-grpc-build/llama.cpp/build/bin/rpc-server turboquant-rpc-server
|
||||
|
||||
package:
|
||||
bash package.sh
|
||||
|
||||
purge:
|
||||
rm -rf $(CURRENT_MAKEFILE_DIR)/../turboquant-*-build
|
||||
rm -rf turboquant-* package
|
||||
|
||||
clean: purge
|
||||
50
backend/cpp/turboquant/apply-patches.sh
Executable file
|
|
@ -0,0 +1,50 @@
|
|||
#!/bin/bash
|
||||
# Apply the turboquant patch series to a cloned llama-cpp-turboquant checkout.
|
||||
#
|
||||
# The turboquant fork branched from upstream llama.cpp before a few API changes
|
||||
# that the shared backend/cpp/llama-cpp/grpc-server.cpp depends on. We carry
|
||||
# those upstream commits as patch files under backend/cpp/turboquant/patches/
|
||||
# and apply them here so the reused grpc-server source compiles against the
|
||||
# fork unmodified.
|
||||
#
|
||||
# Drop the corresponding patch from patches/ whenever the fork catches up with
|
||||
# upstream — the build will fail fast if a patch stops applying, which is the
|
||||
# signal to retire it.
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
if [[ $# -ne 2 ]]; then
|
||||
echo "usage: $0 <llama.cpp-src-dir> <patches-dir>" >&2
|
||||
exit 2
|
||||
fi
|
||||
|
||||
SRC_DIR=$1
|
||||
PATCHES_DIR=$2
|
||||
|
||||
if [[ ! -d "$SRC_DIR" ]]; then
|
||||
echo "source dir does not exist: $SRC_DIR" >&2
|
||||
exit 2
|
||||
fi
|
||||
|
||||
if [[ ! -d "$PATCHES_DIR" ]]; then
|
||||
echo "no patches dir at $PATCHES_DIR, nothing to apply"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
shopt -s nullglob
|
||||
patches=("$PATCHES_DIR"/*.patch)
|
||||
shopt -u nullglob
|
||||
|
||||
if [[ ${#patches[@]} -eq 0 ]]; then
|
||||
echo "no .patch files in $PATCHES_DIR, nothing to apply"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
cd "$SRC_DIR"
|
||||
|
||||
for patch in "${patches[@]}"; do
|
||||
echo "==> applying $patch"
|
||||
git apply --verbose "$patch"
|
||||
done
|
||||
|
||||
echo "all turboquant patches applied successfully"
|
||||
57
backend/cpp/turboquant/package.sh
Executable file
|
|
@ -0,0 +1,57 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Script to copy the appropriate libraries based on architecture
|
||||
# This script is used in the final stage of the Dockerfile
|
||||
|
||||
set -e
|
||||
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
REPO_ROOT="${CURDIR}/../../.."
|
||||
|
||||
# Create lib directory
|
||||
mkdir -p $CURDIR/package/lib
|
||||
|
||||
cp -avrf $CURDIR/turboquant-* $CURDIR/package/
|
||||
cp -rfv $CURDIR/run.sh $CURDIR/package/
|
||||
|
||||
# Detect architecture and copy appropriate libraries
|
||||
if [ -f "/lib64/ld-linux-x86-64.so.2" ]; then
|
||||
# x86_64 architecture
|
||||
echo "Detected x86_64 architecture, copying x86_64 libraries..."
|
||||
cp -arfLv /lib64/ld-linux-x86-64.so.2 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/x86_64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
elif [ -f "/lib/ld-linux-aarch64.so.1" ]; then
|
||||
# ARM64 architecture
|
||||
echo "Detected ARM64 architecture, copying ARM64 libraries..."
|
||||
cp -arfLv /lib/ld-linux-aarch64.so.1 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/aarch64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
else
|
||||
echo "Error: Could not detect architecture"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Package GPU libraries based on BUILD_TYPE
|
||||
GPU_LIB_SCRIPT="${REPO_ROOT}/scripts/build/package-gpu-libs.sh"
|
||||
if [ -f "$GPU_LIB_SCRIPT" ]; then
|
||||
echo "Packaging GPU libraries for BUILD_TYPE=${BUILD_TYPE:-cpu}..."
|
||||
source "$GPU_LIB_SCRIPT" "$CURDIR/package/lib"
|
||||
package_gpu_libs
|
||||
fi
|
||||
|
||||
echo "Packaging completed successfully"
|
||||
ls -liah $CURDIR/package/
|
||||
ls -liah $CURDIR/package/lib/
|
||||
80
backend/cpp/turboquant/patch-grpc-server.sh
Executable file
|
|
@ -0,0 +1,80 @@
|
|||
#!/bin/bash
|
||||
# Patch the shared backend/cpp/llama-cpp/grpc-server.cpp *copy* used by the
|
||||
# turboquant build to account for two gaps between upstream and the fork:
|
||||
#
|
||||
# 1. Augment the kv_cache_types[] allow-list so `LoadModel` accepts the
|
||||
# fork-specific `turbo2` / `turbo3` / `turbo4` cache types.
|
||||
# 2. Replace `get_media_marker()` (added upstream in ggml-org/llama.cpp#21962,
|
||||
# server-side random per-instance marker) with the legacy "<__media__>"
|
||||
# literal. The fork branched before that PR, so server-common.cpp has no
|
||||
# get_media_marker symbol. The fork's mtmd_default_marker() still returns
|
||||
# "<__media__>", and Go-side tooling falls back to that sentinel when the
|
||||
# backend does not expose media_marker, so substituting the literal keeps
|
||||
# behavior identical on the turboquant path.
|
||||
#
|
||||
# We patch the *copy* sitting in turboquant-<flavor>-build/, never the original
|
||||
# under backend/cpp/llama-cpp/, so the stock llama-cpp build keeps compiling
|
||||
# against vanilla upstream.
|
||||
#
|
||||
# Idempotent: skips each insertion if its marker is already present (so re-runs
|
||||
# of the same build dir don't double-insert).
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
if [[ $# -ne 1 ]]; then
|
||||
echo "usage: $0 <grpc-server.cpp>" >&2
|
||||
exit 2
|
||||
fi
|
||||
|
||||
SRC=$1
|
||||
|
||||
if [[ ! -f "$SRC" ]]; then
|
||||
echo "grpc-server.cpp not found at $SRC" >&2
|
||||
exit 2
|
||||
fi
|
||||
|
||||
if grep -q 'GGML_TYPE_TURBO2_0' "$SRC"; then
|
||||
echo "==> $SRC already has TurboQuant cache types, skipping KV allow-list patch"
|
||||
else
|
||||
echo "==> patching $SRC to allow turbo2/turbo3/turbo4 KV-cache types"
|
||||
|
||||
# Insert the three TURBO entries right after the first ` GGML_TYPE_Q5_1,`
|
||||
# line (the kv_cache_types[] allow-list). Using awk because the builder image
|
||||
# does not ship python3, and GNU sed's multi-line `a\` quoting is awkward.
|
||||
awk '
|
||||
/^ GGML_TYPE_Q5_1,$/ && !done {
|
||||
print
|
||||
print " // turboquant fork extras — added by patch-grpc-server.sh"
|
||||
print " GGML_TYPE_TURBO2_0,"
|
||||
print " GGML_TYPE_TURBO3_0,"
|
||||
print " GGML_TYPE_TURBO4_0,"
|
||||
done = 1
|
||||
next
|
||||
}
|
||||
{ print }
|
||||
END {
|
||||
if (!done) {
|
||||
print "patch-grpc-server.sh: anchor ` GGML_TYPE_Q5_1,` not found" > "/dev/stderr"
|
||||
exit 1
|
||||
}
|
||||
}
|
||||
' "$SRC" > "$SRC.tmp"
|
||||
mv "$SRC.tmp" "$SRC"
|
||||
|
||||
echo "==> KV allow-list patch OK"
|
||||
fi
|
||||
|
||||
if grep -q 'get_media_marker()' "$SRC"; then
|
||||
echo "==> patching $SRC to replace get_media_marker() with legacy \"<__media__>\" literal"
|
||||
# Only one call site today (ModelMetadata), but replace all occurrences to
|
||||
# stay robust if upstream adds more. Use a temp file to avoid relying on
|
||||
# sed -i portability (the builder image uses GNU sed, but keeping this
|
||||
# consistent with the awk block above).
|
||||
sed 's/get_media_marker()/"<__media__>"/g' "$SRC" > "$SRC.tmp"
|
||||
mv "$SRC.tmp" "$SRC"
|
||||
echo "==> get_media_marker() substitution OK"
|
||||
else
|
||||
echo "==> $SRC has no get_media_marker() call, skipping media-marker patch"
|
||||
fi
|
||||
|
||||
echo "==> all patches applied"
|
||||
65
backend/cpp/turboquant/run.sh
Executable file
|
|
@ -0,0 +1,65 @@
|
|||
#!/bin/bash
|
||||
set -ex
|
||||
|
||||
# Get the absolute current dir where the script is located
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
|
||||
cd /
|
||||
|
||||
echo "CPU info:"
|
||||
grep -e "model\sname" /proc/cpuinfo | head -1
|
||||
grep -e "flags" /proc/cpuinfo | head -1
|
||||
|
||||
BINARY=turboquant-fallback
|
||||
|
||||
if grep -q -e "\savx\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX found OK"
|
||||
if [ -e $CURDIR/turboquant-avx ]; then
|
||||
BINARY=turboquant-avx
|
||||
fi
|
||||
fi
|
||||
|
||||
if grep -q -e "\savx2\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX2 found OK"
|
||||
if [ -e $CURDIR/turboquant-avx2 ]; then
|
||||
BINARY=turboquant-avx2
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check avx 512
|
||||
if grep -q -e "\savx512f\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX512F found OK"
|
||||
if [ -e $CURDIR/turboquant-avx512 ]; then
|
||||
BINARY=turboquant-avx512
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -n "$LLAMACPP_GRPC_SERVERS" ]; then
|
||||
if [ -e $CURDIR/turboquant-grpc ]; then
|
||||
BINARY=turboquant-grpc
|
||||
fi
|
||||
fi
|
||||
|
||||
# Extend ld library path with the dir where this script is located/lib
|
||||
if [ "$(uname)" == "Darwin" ]; then
|
||||
export DYLD_LIBRARY_PATH=$CURDIR/lib:$DYLD_LIBRARY_PATH
|
||||
else
|
||||
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
|
||||
# Tell rocBLAS where to find TensileLibrary data (GPU kernel tuning files)
|
||||
if [ -d "$CURDIR/lib/rocblas/library" ]; then
|
||||
export ROCBLAS_TENSILE_LIBPATH=$CURDIR/lib/rocblas/library
|
||||
fi
|
||||
fi
|
||||
|
||||
# If there is a lib/ld.so, use it
|
||||
if [ -f $CURDIR/lib/ld.so ]; then
|
||||
echo "Using lib/ld.so"
|
||||
echo "Using binary: $BINARY"
|
||||
exec $CURDIR/lib/ld.so $CURDIR/$BINARY "$@"
|
||||
fi
|
||||
|
||||
echo "Using binary: $BINARY"
|
||||
exec $CURDIR/$BINARY "$@"
|
||||
|
||||
# We should never reach this point, however just in case we do, run fallback
|
||||
exec $CURDIR/turboquant-fallback "$@"
|
||||
56
backend/go/qwen3-tts-cpp/CMakeLists.txt
Normal file
|
|
@ -0,0 +1,56 @@
|
|||
cmake_minimum_required(VERSION 3.14)
|
||||
project(goqwen3ttscpp LANGUAGES C CXX)
|
||||
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
|
||||
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
|
||||
|
||||
set(QWEN3TTS_DIR ${CMAKE_CURRENT_SOURCE_DIR}/sources/qwen3-tts.cpp)
|
||||
|
||||
# Override upstream's CMAKE_CUDA_ARCHITECTURES before add_subdirectory.
|
||||
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
|
||||
set(CMAKE_CUDA_ARCHITECTURES "75-virtual;80-virtual;86-real;89-real")
|
||||
endif()
|
||||
|
||||
# Build ggml from the upstream's submodule FIRST, so that ggml/ggml-base/ggml-cpu
|
||||
# CMake targets exist when the upstream project references them by name.
|
||||
# The upstream CMakeLists.txt uses target_link_libraries(... ggml ggml-base ggml-cpu)
|
||||
# with target_link_directories pointing at a pre-built ggml/build/. By adding ggml
|
||||
# as a subdirectory here, CMake resolves those names as targets instead.
|
||||
add_subdirectory(${QWEN3TTS_DIR}/ggml ggml EXCLUDE_FROM_ALL)
|
||||
|
||||
# Now add the upstream project
|
||||
add_subdirectory(${QWEN3TTS_DIR} qwen3tts EXCLUDE_FROM_ALL)
|
||||
|
||||
add_library(goqwen3ttscpp MODULE cpp/goqwen3ttscpp.cpp)
|
||||
target_link_libraries(goqwen3ttscpp PRIVATE qwen3_tts)
|
||||
|
||||
target_include_directories(goqwen3ttscpp PRIVATE ${QWEN3TTS_DIR}/src)
|
||||
target_include_directories(goqwen3ttscpp SYSTEM PRIVATE ${QWEN3TTS_DIR}/ggml/include)
|
||||
|
||||
# Link GPU backends if available
|
||||
foreach(backend blas cuda metal vulkan)
|
||||
if(TARGET ggml-${backend})
|
||||
target_link_libraries(goqwen3ttscpp PRIVATE ggml-${backend})
|
||||
string(TOUPPER ${backend} BACKEND_UPPER)
|
||||
target_compile_definitions(goqwen3ttscpp PRIVATE QWEN3TTS_HAVE_${BACKEND_UPPER})
|
||||
if(backend STREQUAL "cuda")
|
||||
find_package(CUDAToolkit QUIET)
|
||||
if(CUDAToolkit_FOUND)
|
||||
target_link_libraries(goqwen3ttscpp PRIVATE CUDA::cudart)
|
||||
endif()
|
||||
endif()
|
||||
endif()
|
||||
endforeach()
|
||||
|
||||
if(MSVC)
|
||||
target_compile_options(goqwen3ttscpp PRIVATE /W4 /wd4100 /wd4505)
|
||||
else()
|
||||
target_compile_options(goqwen3ttscpp PRIVATE -Wall -Wextra -Wshadow -Wconversion
|
||||
-Wno-unused-parameter -Wno-unused-function -Wno-sign-conversion)
|
||||
endif()
|
||||
|
||||
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9.0)
|
||||
target_link_libraries(goqwen3ttscpp PRIVATE stdc++fs)
|
||||
endif()
|
||||
|
||||
set_property(TARGET goqwen3ttscpp PROPERTY CXX_STANDARD 17)
|
||||
set_target_properties(goqwen3ttscpp PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})
|
||||
126
backend/go/qwen3-tts-cpp/Makefile
Normal file
|
|
@ -0,0 +1,126 @@
|
|||
CMAKE_ARGS?=
|
||||
BUILD_TYPE?=
|
||||
NATIVE?=false
|
||||
|
||||
GOCMD?=go
|
||||
GO_TAGS?=
|
||||
JOBS?=$(shell nproc --ignore=1)
|
||||
|
||||
# qwen3-tts.cpp version
|
||||
QWEN3TTS_REPO?=https://github.com/predict-woo/qwen3-tts.cpp
|
||||
QWEN3TTS_CPP_VERSION?=7a762e2ad4bacc6fdda81d81bf10a09ffb546f29
|
||||
SO_TARGET?=libgoqwen3ttscpp.so
|
||||
|
||||
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF
|
||||
|
||||
ifeq ($(NATIVE),false)
|
||||
CMAKE_ARGS+=-DGGML_NATIVE=OFF
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),cublas)
|
||||
CMAKE_ARGS+=-DGGML_CUDA=ON
|
||||
else ifeq ($(BUILD_TYPE),openblas)
|
||||
CMAKE_ARGS+=-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
|
||||
else ifeq ($(BUILD_TYPE),clblas)
|
||||
CMAKE_ARGS+=-DGGML_CLBLAST=ON -DCLBlast_DIR=/some/path
|
||||
else ifeq ($(BUILD_TYPE),hipblas)
|
||||
CMAKE_ARGS+=-DGGML_HIPBLAS=ON
|
||||
else ifeq ($(BUILD_TYPE),vulkan)
|
||||
CMAKE_ARGS+=-DGGML_VULKAN=ON
|
||||
else ifeq ($(OS),Darwin)
|
||||
ifneq ($(BUILD_TYPE),metal)
|
||||
CMAKE_ARGS+=-DGGML_METAL=OFF
|
||||
else
|
||||
CMAKE_ARGS+=-DGGML_METAL=ON
|
||||
CMAKE_ARGS+=-DGGML_METAL_EMBED_LIBRARY=ON
|
||||
endif
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),sycl_f16)
|
||||
CMAKE_ARGS+=-DGGML_SYCL=ON \
|
||||
-DCMAKE_C_COMPILER=icx \
|
||||
-DCMAKE_CXX_COMPILER=icpx \
|
||||
-DGGML_SYCL_F16=ON
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),sycl_f32)
|
||||
CMAKE_ARGS+=-DGGML_SYCL=ON \
|
||||
-DCMAKE_C_COMPILER=icx \
|
||||
-DCMAKE_CXX_COMPILER=icpx
|
||||
endif
|
||||
|
||||
sources/qwen3-tts.cpp:
|
||||
mkdir -p sources/qwen3-tts.cpp
|
||||
cd sources/qwen3-tts.cpp && \
|
||||
git init && \
|
||||
git remote add origin $(QWEN3TTS_REPO) && \
|
||||
git fetch origin && \
|
||||
git checkout $(QWEN3TTS_CPP_VERSION) && \
|
||||
git submodule update --init --recursive --depth 1 --single-branch
|
||||
|
||||
# Detect OS
|
||||
UNAME_S := $(shell uname -s)
|
||||
|
||||
# Only build CPU variants on Linux
|
||||
ifeq ($(UNAME_S),Linux)
|
||||
VARIANT_TARGETS = libgoqwen3ttscpp-avx.so libgoqwen3ttscpp-avx2.so libgoqwen3ttscpp-avx512.so libgoqwen3ttscpp-fallback.so
|
||||
else
|
||||
# On non-Linux (e.g., Darwin), build only fallback variant
|
||||
VARIANT_TARGETS = libgoqwen3ttscpp-fallback.so
|
||||
endif
|
||||
|
||||
qwen3-tts-cpp: main.go goqwen3ttscpp.go $(VARIANT_TARGETS)
|
||||
CGO_ENABLED=0 $(GOCMD) build -tags "$(GO_TAGS)" -o qwen3-tts-cpp ./
|
||||
|
||||
package: qwen3-tts-cpp
|
||||
bash package.sh
|
||||
|
||||
build: package
|
||||
|
||||
clean: purge
|
||||
rm -rf libgoqwen3ttscpp*.so package sources/qwen3-tts.cpp qwen3-tts-cpp
|
||||
|
||||
purge:
|
||||
rm -rf build*
|
||||
|
||||
# Variants must build sequentially
|
||||
.NOTPARALLEL:
|
||||
|
||||
# Build all variants (Linux only)
|
||||
ifeq ($(UNAME_S),Linux)
|
||||
libgoqwen3ttscpp-avx.so: sources/qwen3-tts.cpp
|
||||
$(info ${GREEN}I qwen3-tts-cpp build info:avx${RESET})
|
||||
SO_TARGET=libgoqwen3ttscpp-avx.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" $(MAKE) libgoqwen3ttscpp-custom
|
||||
rm -rf build-libgoqwen3ttscpp-avx.so
|
||||
|
||||
libgoqwen3ttscpp-avx2.so: sources/qwen3-tts.cpp
|
||||
$(info ${GREEN}I qwen3-tts-cpp build info:avx2${RESET})
|
||||
SO_TARGET=libgoqwen3ttscpp-avx2.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=off -DGGML_FMA=on -DGGML_F16C=on -DGGML_BMI2=on" $(MAKE) libgoqwen3ttscpp-custom
|
||||
rm -rf build-libgoqwen3ttscpp-avx2.so
|
||||
|
||||
libgoqwen3ttscpp-avx512.so: sources/qwen3-tts.cpp
|
||||
$(info ${GREEN}I qwen3-tts-cpp build info:avx512${RESET})
|
||||
SO_TARGET=libgoqwen3ttscpp-avx512.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=on -DGGML_FMA=on -DGGML_F16C=on -DGGML_BMI2=on" $(MAKE) libgoqwen3ttscpp-custom
|
||||
rm -rf build-libgoqwen3ttscpp-avx512.so
|
||||
endif
|
||||
|
||||
# Build fallback variant (all platforms)
|
||||
libgoqwen3ttscpp-fallback.so: sources/qwen3-tts.cpp
|
||||
$(info ${GREEN}I qwen3-tts-cpp build info:fallback${RESET})
|
||||
SO_TARGET=libgoqwen3ttscpp-fallback.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" $(MAKE) libgoqwen3ttscpp-custom
|
||||
rm -rf build-libgoqwen3ttscpp-fallback.so
|
||||
|
||||
libgoqwen3ttscpp-custom: CMakeLists.txt cpp/goqwen3ttscpp.cpp cpp/goqwen3ttscpp.h
|
||||
mkdir -p build-$(SO_TARGET) && \
|
||||
cd build-$(SO_TARGET) && \
|
||||
cmake .. $(CMAKE_ARGS) && \
|
||||
cmake --build . --config Release -j$(JOBS) --target goqwen3ttscpp && \
|
||||
cd .. && \
|
||||
mv build-$(SO_TARGET)/libgoqwen3ttscpp.so ./$(SO_TARGET)
|
||||
|
||||
test: qwen3-tts-cpp
|
||||
@echo "Running qwen3-tts-cpp tests..."
|
||||
bash test.sh
|
||||
@echo "qwen3-tts-cpp tests completed."
|
||||
|
||||
all: qwen3-tts-cpp package
|
||||
161
backend/go/qwen3-tts-cpp/cpp/goqwen3ttscpp.cpp
Normal file
|
|
@ -0,0 +1,161 @@
|
|||
#include "goqwen3ttscpp.h"
|
||||
#include "ggml-backend.h"
|
||||
#include "qwen3_tts.h"
|
||||
|
||||
#include <cmath>
|
||||
#include <cstdio>
|
||||
#include <cstdlib>
|
||||
#include <cstring>
|
||||
#include <string>
|
||||
|
||||
using namespace qwen3_tts;
|
||||
|
||||
// Global engine (loaded once, reused across requests)
|
||||
static Qwen3TTS *g_engine = nullptr;
|
||||
static bool g_loaded = false;
|
||||
static int g_threads = 4;
|
||||
|
||||
static void ggml_log_cb(enum ggml_log_level level, const char *log, void *data) {
|
||||
const char *level_str;
|
||||
if (!log)
|
||||
return;
|
||||
switch (level) {
|
||||
case GGML_LOG_LEVEL_DEBUG:
|
||||
level_str = "DEBUG";
|
||||
break;
|
||||
case GGML_LOG_LEVEL_INFO:
|
||||
level_str = "INFO";
|
||||
break;
|
||||
case GGML_LOG_LEVEL_WARN:
|
||||
level_str = "WARN";
|
||||
break;
|
||||
case GGML_LOG_LEVEL_ERROR:
|
||||
level_str = "ERROR";
|
||||
break;
|
||||
default:
|
||||
level_str = "?????";
|
||||
break;
|
||||
}
|
||||
fprintf(stderr, "[%-5s] ", level_str);
|
||||
fputs(log, stderr);
|
||||
fflush(stderr);
|
||||
}
|
||||
|
||||
// Map language string to language_id token used by the model
|
||||
static int language_to_id(const char *lang) {
|
||||
if (!lang || lang[0] == '\0')
|
||||
return 2050; // default: English
|
||||
std::string l(lang);
|
||||
if (l == "en")
|
||||
return 2050;
|
||||
if (l == "ru")
|
||||
return 2069;
|
||||
if (l == "zh")
|
||||
return 2055;
|
||||
if (l == "ja")
|
||||
return 2058;
|
||||
if (l == "ko")
|
||||
return 2064;
|
||||
if (l == "de")
|
||||
return 2053;
|
||||
if (l == "fr")
|
||||
return 2061;
|
||||
if (l == "es")
|
||||
return 2054;
|
||||
if (l == "it")
|
||||
return 2056;
|
||||
if (l == "pt")
|
||||
return 2057;
|
||||
fprintf(stderr, "[qwen3-tts-cpp] Unknown language '%s', defaulting to English\n",
|
||||
lang);
|
||||
return 2050;
|
||||
}
|
||||
|
||||
int load_model(const char *model_dir, int n_threads) {
|
||||
ggml_log_set(ggml_log_cb, nullptr);
|
||||
ggml_backend_load_all();
|
||||
|
||||
if (n_threads <= 0)
|
||||
n_threads = 4;
|
||||
g_threads = n_threads;
|
||||
|
||||
fprintf(stderr, "[qwen3-tts-cpp] Loading models from %s (threads=%d)\n",
|
||||
model_dir, n_threads);
|
||||
|
||||
g_engine = new Qwen3TTS();
|
||||
if (!g_engine->load_models(model_dir)) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] FATAL: failed to load models from %s\n",
|
||||
model_dir);
|
||||
delete g_engine;
|
||||
g_engine = nullptr;
|
||||
return 1;
|
||||
}
|
||||
|
||||
g_loaded = true;
|
||||
fprintf(stderr, "[qwen3-tts-cpp] Models loaded successfully\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
int synthesize(const char *text, const char *ref_audio_path, const char *dst,
|
||||
const char *language, float temperature, float top_p,
|
||||
int top_k, float repetition_penalty, int max_audio_tokens,
|
||||
int n_threads) {
|
||||
if (!g_loaded || !g_engine) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] ERROR: models not loaded\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
if (!text || !dst) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] ERROR: text and dst are required\n");
|
||||
return 2;
|
||||
}
|
||||
|
||||
tts_params params;
|
||||
params.max_audio_tokens = max_audio_tokens > 0 ? max_audio_tokens : 4096;
|
||||
params.temperature = temperature;
|
||||
params.top_p = top_p;
|
||||
params.top_k = top_k;
|
||||
params.repetition_penalty = repetition_penalty;
|
||||
params.n_threads = n_threads > 0 ? n_threads : g_threads;
|
||||
params.language_id = language_to_id(language);
|
||||
|
||||
fprintf(stderr, "[qwen3-tts-cpp] Synthesizing: text='%.50s%s', lang_id=%d, "
|
||||
"temp=%.2f, threads=%d\n",
|
||||
text, (strlen(text) > 50 ? "..." : ""), params.language_id,
|
||||
temperature, params.n_threads);
|
||||
|
||||
tts_result result;
|
||||
bool has_ref = ref_audio_path && ref_audio_path[0] != '\0';
|
||||
|
||||
if (has_ref) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] Voice cloning with ref: %s\n",
|
||||
ref_audio_path);
|
||||
result = g_engine->synthesize_with_voice(text, ref_audio_path, params);
|
||||
} else {
|
||||
result = g_engine->synthesize(text, params);
|
||||
}
|
||||
|
||||
if (!result.success) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] ERROR: synthesis failed: %s\n",
|
||||
result.error_msg.c_str());
|
||||
return 3;
|
||||
}
|
||||
|
||||
int n_samples = (int)result.audio.size();
|
||||
if (n_samples == 0) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] ERROR: synthesis produced no samples\n");
|
||||
return 4;
|
||||
}
|
||||
|
||||
fprintf(stderr,
|
||||
"[qwen3-tts-cpp] Synthesis done: %d samples (%.2fs @ 24kHz)\n",
|
||||
n_samples, (float)n_samples / 24000.0f);
|
||||
|
||||
if (!save_audio_file(dst, result.audio, result.sample_rate)) {
|
||||
fprintf(stderr, "[qwen3-tts-cpp] ERROR: failed to write %s\n", dst);
|
||||
return 5;
|
||||
}
|
||||
|
||||
fprintf(stderr, "[qwen3-tts-cpp] Wrote %s\n", dst);
|
||||
return 0;
|
||||
}
|
||||
12
backend/go/qwen3-tts-cpp/cpp/goqwen3ttscpp.h
Normal file
|
|
@ -0,0 +1,12 @@
|
|||
#pragma once
|
||||
|
||||
#include <cstddef>
|
||||
#include <cstdint>
|
||||
|
||||
extern "C" {
|
||||
int load_model(const char *model_dir, int n_threads);
|
||||
int synthesize(const char *text, const char *ref_audio_path, const char *dst,
|
||||
const char *language, float temperature, float top_p,
|
||||
int top_k, float repetition_penalty, int max_audio_tokens,
|
||||
int n_threads);
|
||||
}
|
||||
74
backend/go/qwen3-tts-cpp/goqwen3ttscpp.go
Normal file
|
|
@ -0,0 +1,74 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/mudler/LocalAI/pkg/grpc/base"
|
||||
pb "github.com/mudler/LocalAI/pkg/grpc/proto"
|
||||
)
|
||||
|
||||
var (
|
||||
CppLoadModel func(modelDir string, nThreads int) int
|
||||
CppSynthesize func(text, refAudioPath, dst, language string,
|
||||
temperature, topP float32, topK int,
|
||||
repetitionPenalty float32, maxAudioTokens, nThreads int) int
|
||||
)
|
||||
|
||||
type Qwen3TtsCpp struct {
|
||||
base.SingleThread
|
||||
threads int
|
||||
}
|
||||
|
||||
func (q *Qwen3TtsCpp) Load(opts *pb.ModelOptions) error {
|
||||
// ModelFile is the model directory path (containing GGUF files)
|
||||
modelDir := opts.ModelFile
|
||||
if modelDir == "" {
|
||||
modelDir = opts.ModelPath
|
||||
}
|
||||
|
||||
// Resolve relative paths
|
||||
if !filepath.IsAbs(modelDir) && opts.ModelPath != "" {
|
||||
modelDir = filepath.Join(opts.ModelPath, modelDir)
|
||||
}
|
||||
|
||||
threads := int(opts.Threads)
|
||||
if threads <= 0 {
|
||||
threads = 4
|
||||
}
|
||||
q.threads = threads
|
||||
|
||||
fmt.Fprintf(os.Stderr, "[qwen3-tts-cpp] Loading models from: %s (threads=%d)\n", modelDir, threads)
|
||||
|
||||
if ret := CppLoadModel(modelDir, threads); ret != 0 {
|
||||
return fmt.Errorf("failed to load qwen3-tts model (error code: %d)", ret)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (q *Qwen3TtsCpp) TTS(req *pb.TTSRequest) error {
|
||||
text := req.Text
|
||||
voice := req.Voice // reference audio path for voice cloning (empty = no cloning)
|
||||
dst := req.Dst
|
||||
language := ""
|
||||
if req.Language != nil {
|
||||
language = *req.Language
|
||||
}
|
||||
|
||||
// Synthesis parameters with sensible defaults
|
||||
temperature := float32(0.9)
|
||||
topP := float32(0.8)
|
||||
topK := 50
|
||||
repetitionPenalty := float32(1.05)
|
||||
maxAudioTokens := 4096
|
||||
|
||||
if ret := CppSynthesize(text, voice, dst, language,
|
||||
temperature, topP, topK, repetitionPenalty,
|
||||
maxAudioTokens, q.threads); ret != 0 {
|
||||
return fmt.Errorf("failed to synthesize audio (error code: %d)", ret)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
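
A short usage sketch against this handler, using the same pb package as the tests below: Voice carries the path of a reference clip for voice cloning and may be left empty for the default speaker. The field names match what TTS() reads from pb.TTSRequest above:

package qwen3ttsclient

import (
	"context"

	pb "github.com/mudler/LocalAI/pkg/grpc/proto"
)

// synthesizeWithClone issues a TTS request with an optional reference clip.
func synthesizeWithClone(ctx context.Context, client pb.BackendClient, text, refWav, dst string) error {
	language := "en"
	_, err := client.TTS(ctx, &pb.TTSRequest{
		Text:     text,
		Dst:      dst,
		Voice:    refWav, // empty string disables cloning
		Language: &language,
	})
	return err
}
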
||||
47
backend/go/qwen3-tts-cpp/main.go
Normal file
|
|
@ -0,0 +1,47 @@
|
|||
package main
|
||||
|
||||
// Note: this is started internally by LocalAI and a server is allocated for each model
|
||||
import (
|
||||
"flag"
|
||||
"os"
|
||||
|
||||
"github.com/ebitengine/purego"
|
||||
grpc "github.com/mudler/LocalAI/pkg/grpc"
|
||||
)
|
||||
|
||||
var (
|
||||
addr = flag.String("addr", "localhost:50051", "the address to connect to")
|
||||
)
|
||||
|
||||
type LibFuncs struct {
|
||||
FuncPtr any
|
||||
Name string
|
||||
}
|
||||
|
||||
func main() {
|
||||
// Get library name from environment variable, default to fallback
|
||||
libName := os.Getenv("QWEN3TTS_LIBRARY")
|
||||
if libName == "" {
|
||||
libName = "./libgoqwen3ttscpp-fallback.so"
|
||||
}
|
||||
|
||||
gosd, err := purego.Dlopen(libName, purego.RTLD_NOW|purego.RTLD_GLOBAL)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
libFuncs := []LibFuncs{
|
||||
{&CppLoadModel, "load_model"},
|
||||
{&CppSynthesize, "synthesize"},
|
||||
}
|
||||
|
||||
for _, lf := range libFuncs {
|
||||
purego.RegisterLibFunc(lf.FuncPtr, gosd, lf.Name)
|
||||
}
|
||||
|
||||
flag.Parse()
|
||||
|
||||
if err := grpc.StartServer(*addr, &Qwen3TtsCpp{}); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
64
backend/go/qwen3-tts-cpp/package.sh
Executable file
|
|
@ -0,0 +1,64 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Script to copy the appropriate libraries based on architecture
|
||||
# This script is used in the final stage of the Dockerfile
|
||||
|
||||
set -e
|
||||
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
REPO_ROOT="${CURDIR}/../../.."
|
||||
|
||||
# Create lib directory
|
||||
mkdir -p $CURDIR/package/lib
|
||||
|
||||
cp -avf $CURDIR/qwen3-tts-cpp $CURDIR/package/
|
||||
cp -fv $CURDIR/libgoqwen3ttscpp-*.so $CURDIR/package/
|
||||
cp -fv $CURDIR/run.sh $CURDIR/package/
|
||||
|
||||
# Detect architecture and copy appropriate libraries
|
||||
if [ -f "/lib64/ld-linux-x86-64.so.2" ]; then
|
||||
# x86_64 architecture
|
||||
echo "Detected x86_64 architecture, copying x86_64 libraries..."
|
||||
cp -arfLv /lib64/ld-linux-x86-64.so.2 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/x86_64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
elif [ -f "/lib/ld-linux-aarch64.so.1" ]; then
|
||||
# ARM64 architecture
|
||||
echo "Detected ARM64 architecture, copying ARM64 libraries..."
|
||||
cp -arfLv /lib/ld-linux-aarch64.so.1 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/aarch64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
elif [ $(uname -s) = "Darwin" ]; then
|
||||
echo "Detected Darwin"
|
||||
else
|
||||
echo "Error: Could not detect architecture"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Package GPU libraries based on BUILD_TYPE
|
||||
GPU_LIB_SCRIPT="${REPO_ROOT}/scripts/build/package-gpu-libs.sh"
|
||||
if [ -f "$GPU_LIB_SCRIPT" ]; then
|
||||
echo "Packaging GPU libraries for BUILD_TYPE=${BUILD_TYPE:-cpu}..."
|
||||
source "$GPU_LIB_SCRIPT" "$CURDIR/package/lib"
|
||||
package_gpu_libs
|
||||
fi
|
||||
|
||||
echo "Packaging completed successfully"
|
||||
ls -liah $CURDIR/package/
|
||||
ls -liah $CURDIR/package/lib/
|
||||
173
backend/go/qwen3-tts-cpp/qwen3ttscpp_test.go
Normal file
|
|
@ -0,0 +1,173 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pb "github.com/mudler/LocalAI/pkg/grpc/proto"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
const (
|
||||
testAddr = "localhost:50051"
|
||||
startupWait = 5 * time.Second
|
||||
)
|
||||
|
||||
func skipIfNoModel(t *testing.T) string {
|
||||
t.Helper()
|
||||
modelDir := os.Getenv("QWEN3TTS_MODEL_DIR")
|
||||
if modelDir == "" {
|
||||
t.Skip("QWEN3TTS_MODEL_DIR not set, skipping test (set to directory with GGUF models)")
|
||||
}
|
||||
if _, err := os.Stat(filepath.Join(modelDir, "qwen3-tts-0.6b-f16.gguf")); os.IsNotExist(err) {
|
||||
t.Skipf("TTS model file not found in %s, skipping", modelDir)
|
||||
}
|
||||
if _, err := os.Stat(filepath.Join(modelDir, "qwen3-tts-tokenizer-f16.gguf")); os.IsNotExist(err) {
|
||||
t.Skipf("Tokenizer model file not found in %s, skipping", modelDir)
|
||||
}
|
||||
return modelDir
|
||||
}
|
||||
|
||||
func startServer(t *testing.T) *exec.Cmd {
|
||||
t.Helper()
|
||||
binary := os.Getenv("QWEN3TTS_BINARY")
|
||||
if binary == "" {
|
||||
binary = "./qwen3-tts-cpp"
|
||||
}
|
||||
if _, err := os.Stat(binary); os.IsNotExist(err) {
|
||||
t.Skipf("Backend binary not found at %s, skipping", binary)
|
||||
}
|
||||
cmd := exec.Command(binary, "--addr", testAddr)
|
||||
cmd.Stdout = os.Stderr
|
||||
cmd.Stderr = os.Stderr
|
||||
if err := cmd.Start(); err != nil {
|
||||
t.Fatalf("Failed to start server: %v", err)
|
||||
}
|
||||
time.Sleep(startupWait)
|
||||
return cmd
|
||||
}
|
||||
|
||||
func stopServer(cmd *exec.Cmd) {
|
||||
if cmd != nil && cmd.Process != nil {
|
||||
cmd.Process.Kill()
|
||||
cmd.Wait()
|
||||
}
|
||||
}
|
||||
|
||||
func dialGRPC(t *testing.T) *grpc.ClientConn {
|
||||
t.Helper()
|
||||
conn, err := grpc.Dial(testAddr,
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
grpc.WithDefaultCallOptions(
|
||||
grpc.MaxCallRecvMsgSize(50*1024*1024),
|
||||
grpc.MaxCallSendMsgSize(50*1024*1024),
|
||||
),
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to dial gRPC: %v", err)
|
||||
}
|
||||
return conn
|
||||
}
|
||||
|
||||
func TestServerHealth(t *testing.T) {
|
||||
cmd := startServer(t)
|
||||
defer stopServer(cmd)
|
||||
|
||||
conn := dialGRPC(t)
|
||||
defer conn.Close()
|
||||
|
||||
client := pb.NewBackendClient(conn)
|
||||
resp, err := client.Health(context.Background(), &pb.HealthMessage{})
|
||||
if err != nil {
|
||||
t.Fatalf("Health check failed: %v", err)
|
||||
}
|
||||
if string(resp.Message) != "OK" {
|
||||
t.Fatalf("Expected OK, got %s", string(resp.Message))
|
||||
}
|
||||
}
|
||||
|
||||
func TestLoadModel(t *testing.T) {
|
||||
modelDir := skipIfNoModel(t)
|
||||
cmd := startServer(t)
|
||||
defer stopServer(cmd)
|
||||
|
||||
conn := dialGRPC(t)
|
||||
defer conn.Close()
|
||||
|
||||
client := pb.NewBackendClient(conn)
|
||||
|
||||
resp, err := client.LoadModel(context.Background(), &pb.ModelOptions{
|
||||
ModelFile: modelDir,
|
||||
Threads: 4,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("LoadModel failed: %v", err)
|
||||
}
|
||||
if !resp.Success {
|
||||
t.Fatalf("LoadModel returned failure: %s", resp.Message)
|
||||
}
|
||||
}
|
||||
|
||||
func TestTTS(t *testing.T) {
|
||||
modelDir := skipIfNoModel(t)
|
||||
|
||||
tmpDir, err := os.MkdirTemp("", "qwen3tts-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
t.Cleanup(func() { os.RemoveAll(tmpDir) })
|
||||
|
||||
outputFile := filepath.Join(tmpDir, "output.wav")
|
||||
|
||||
cmd := startServer(t)
|
||||
defer stopServer(cmd)
|
||||
|
||||
conn := dialGRPC(t)
|
||||
defer conn.Close()
|
||||
|
||||
client := pb.NewBackendClient(conn)
|
||||
|
||||
// Load models
|
||||
loadResp, err := client.LoadModel(context.Background(), &pb.ModelOptions{
|
||||
ModelFile: modelDir,
|
||||
Threads: 4,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("LoadModel failed: %v", err)
|
||||
}
|
||||
if !loadResp.Success {
|
||||
t.Fatalf("LoadModel returned failure: %s", loadResp.Message)
|
||||
}
|
||||
|
||||
// Synthesize speech
|
||||
language := "en"
|
||||
_, err = client.TTS(context.Background(), &pb.TTSRequest{
|
||||
Text: "Hello, this is a test of the Qwen3 text to speech system.",
|
||||
Dst: outputFile,
|
||||
Language: &language,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("TTS failed: %v", err)
|
||||
}
|
||||
|
||||
// Verify output file exists and has content
|
||||
info, err := os.Stat(outputFile)
|
||||
if os.IsNotExist(err) {
|
||||
t.Fatal("Output audio file was not created")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to stat output file: %v", err)
|
||||
}
|
||||
|
||||
t.Logf("Output file size: %d bytes", info.Size())
|
||||
|
||||
// WAV header is 44 bytes minimum; any real audio should be much larger
|
||||
if info.Size() < 1000 {
|
||||
t.Errorf("Output file too small (%d bytes), expected real audio data", info.Size())
|
||||
}
|
||||
}
|
||||
52
backend/go/qwen3-tts-cpp/run.sh
Executable file
|
|
@ -0,0 +1,52 @@
|
|||
#!/bin/bash
|
||||
set -ex
|
||||
|
||||
# Get the absolute current dir where the script is located
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
|
||||
cd /
|
||||
|
||||
echo "CPU info:"
|
||||
if [ "$(uname)" != "Darwin" ]; then
|
||||
grep -e "model\sname" /proc/cpuinfo | head -1
|
||||
grep -e "flags" /proc/cpuinfo | head -1
|
||||
fi
|
||||
|
||||
LIBRARY="$CURDIR/libgoqwen3ttscpp-fallback.so"
|
||||
|
||||
if [ "$(uname)" != "Darwin" ]; then
|
||||
if grep -q -e "\savx\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX found OK"
|
||||
if [ -e $CURDIR/libgoqwen3ttscpp-avx.so ]; then
|
||||
LIBRARY="$CURDIR/libgoqwen3ttscpp-avx.so"
|
||||
fi
|
||||
fi
|
||||
|
||||
if grep -q -e "\savx2\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX2 found OK"
|
||||
if [ -e $CURDIR/libgoqwen3ttscpp-avx2.so ]; then
|
||||
LIBRARY="$CURDIR/libgoqwen3ttscpp-avx2.so"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check avx 512
|
||||
if grep -q -e "\savx512f\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX512F found OK"
|
||||
if [ -e $CURDIR/libgoqwen3ttscpp-avx512.so ]; then
|
||||
LIBRARY="$CURDIR/libgoqwen3ttscpp-avx512.so"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
|
||||
export QWEN3TTS_LIBRARY=$LIBRARY
|
||||
|
||||
# If there is a lib/ld.so, use it
|
||||
if [ -f $CURDIR/lib/ld.so ]; then
|
||||
echo "Using lib/ld.so"
|
||||
echo "Using library: $LIBRARY"
|
||||
exec $CURDIR/lib/ld.so $CURDIR/qwen3-tts-cpp "$@"
|
||||
fi
|
||||
|
||||
echo "Using library: $LIBRARY"
|
||||
exec $CURDIR/qwen3-tts-cpp "$@"
|
||||
52
backend/go/qwen3-tts-cpp/test.sh
Executable file
|
|
@ -0,0 +1,52 @@
|
|||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
|
||||
echo "Running qwen3-tts-cpp backend tests..."
|
||||
|
||||
# The test requires:
|
||||
# - QWEN3TTS_MODEL_DIR: path to directory containing GGUF model files
|
||||
# - QWEN3TTS_BINARY: path to the qwen3-tts-cpp binary (defaults to ./qwen3-tts-cpp)
|
||||
#
|
||||
# Tests that require the model will be skipped if QWEN3TTS_MODEL_DIR is not set
|
||||
# or the directory does not contain the required model files.
|
||||
|
||||
cd "$CURDIR"
|
||||
|
||||
# Only auto-download models when QWEN3TTS_MODEL_DIR is not explicitly set
|
||||
if [ -z "$QWEN3TTS_MODEL_DIR" ]; then
|
||||
export QWEN3TTS_MODEL_DIR="./qwen3-tts-models"
|
||||
|
||||
if [ ! -d "$QWEN3TTS_MODEL_DIR" ]; then
|
||||
echo "Creating qwen3-tts-models directory for tests..."
|
||||
mkdir -p "$QWEN3TTS_MODEL_DIR"
|
||||
REPO_ID="endo5501/qwen3-tts.cpp"
|
||||
echo "Repository: ${REPO_ID}"
|
||||
echo ""
|
||||
|
||||
# Files to download (smallest model for testing)
|
||||
FILES=(
|
||||
"qwen3-tts-0.6b-f16.gguf"
|
||||
"qwen3-tts-tokenizer-f16.gguf"
|
||||
)
|
||||
|
||||
BASE_URL="https://huggingface.co/${REPO_ID}/resolve/main"
|
||||
|
||||
for file in "${FILES[@]}"; do
|
||||
dest="${QWEN3TTS_MODEL_DIR}/${file}"
|
||||
if [ -f "${dest}" ]; then
|
||||
echo " [skip] ${file} (already exists)"
|
||||
else
|
||||
echo " [download] ${file}..."
|
||||
curl -L -o "${dest}" "${BASE_URL}/${file}" --progress-bar
|
||||
echo " [done] ${file}"
|
||||
fi
|
||||
done
|
||||
fi
|
||||
fi
|
||||
|
||||
# Run Go tests
|
||||
go test -v -timeout 600s .
|
||||
|
||||
echo "All qwen3-tts-cpp tests passed."
|
||||
7
backend/go/sam3-cpp/.gitignore
vendored
Normal file
|
|
@ -0,0 +1,7 @@
|
|||
sources/
|
||||
build*/
|
||||
package/
|
||||
libgosam3*.so
|
||||
sam3-cpp
|
||||
test-models/
|
||||
test-data/
|
||||
26
backend/go/sam3-cpp/CMakeLists.txt
Normal file
|
|
@ -0,0 +1,26 @@
|
|||
cmake_minimum_required(VERSION 3.14)
|
||||
project(gosam3 LANGUAGES C CXX)
|
||||
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
|
||||
|
||||
# Build ggml as static libraries to avoid runtime .so dependencies
|
||||
set(BUILD_SHARED_LIBS OFF CACHE BOOL "Build static libraries" FORCE)
|
||||
|
||||
set(SAM3_BUILD_EXAMPLES OFF CACHE BOOL "Disable sam3.cpp examples" FORCE)
|
||||
set(SAM3_BUILD_TESTS OFF CACHE BOOL "Disable sam3.cpp tests" FORCE)
|
||||
|
||||
add_subdirectory(./sources/sam3.cpp)
|
||||
|
||||
add_library(gosam3 MODULE gosam3.cpp)
|
||||
target_link_libraries(gosam3 PRIVATE sam3 ggml)
|
||||
|
||||
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9.0)
|
||||
target_link_libraries(gosam3 PRIVATE stdc++fs)
|
||||
endif()
|
||||
|
||||
target_include_directories(gosam3 PUBLIC
|
||||
sources/sam3.cpp
|
||||
sources/sam3.cpp/ggml/include
|
||||
)
|
||||
|
||||
set_property(TARGET gosam3 PROPERTY CXX_STANDARD 14)
|
||||
set_target_properties(gosam3 PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})
|
||||
122
backend/go/sam3-cpp/Makefile
Normal file
|
|
@ -0,0 +1,122 @@
|
|||
CMAKE_ARGS?=
|
||||
BUILD_TYPE?=
|
||||
NATIVE?=false
|
||||
|
||||
GOCMD?=go
|
||||
GO_TAGS?=
|
||||
JOBS?=$(shell nproc --ignore=1)
|
||||
|
||||
# sam3.cpp
|
||||
SAM3_REPO?=https://github.com/PABannier/sam3.cpp
|
||||
SAM3_VERSION?=01832ef85fcc8eb6488f1d01cd247f07e96ff5a9
|
||||
|
||||
ifeq ($(NATIVE),false)
|
||||
CMAKE_ARGS+=-DGGML_NATIVE=OFF
|
||||
endif
|
||||
|
||||
# If build type is cublas, then we set -DGGML_CUDA=ON to CMAKE_ARGS automatically
|
||||
ifeq ($(BUILD_TYPE),cublas)
|
||||
CMAKE_ARGS+=-DGGML_CUDA=ON
|
||||
else ifeq ($(BUILD_TYPE),openblas)
|
||||
CMAKE_ARGS+=-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
|
||||
else ifeq ($(BUILD_TYPE),clblas)
|
||||
CMAKE_ARGS+=-DGGML_CLBLAST=ON
|
||||
else ifeq ($(BUILD_TYPE),hipblas)
|
||||
ROCM_HOME ?= /opt/rocm
|
||||
ROCM_PATH ?= /opt/rocm
|
||||
export CXX=$(ROCM_HOME)/llvm/bin/clang++
|
||||
export CC=$(ROCM_HOME)/llvm/bin/clang
|
||||
AMDGPU_TARGETS?=gfx908,gfx90a,gfx942,gfx950,gfx1030,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201
|
||||
CMAKE_ARGS+=-DGGML_HIPBLAS=ON -DAMDGPU_TARGETS=$(AMDGPU_TARGETS)
|
||||
else ifeq ($(BUILD_TYPE),vulkan)
|
||||
CMAKE_ARGS+=-DGGML_VULKAN=ON
|
||||
else ifeq ($(OS),Darwin)
|
||||
ifneq ($(BUILD_TYPE),metal)
|
||||
CMAKE_ARGS+=-DGGML_METAL=OFF
|
||||
else
|
||||
CMAKE_ARGS+=-DGGML_METAL=ON
|
||||
CMAKE_ARGS+=-DGGML_METAL_EMBED_LIBRARY=ON
|
||||
endif
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),sycl_f16)
|
||||
CMAKE_ARGS+=-DGGML_SYCL=ON \
|
||||
-DCMAKE_C_COMPILER=icx \
|
||||
-DCMAKE_CXX_COMPILER=icpx \
|
||||
-DGGML_SYCL_F16=ON
|
||||
endif
|
||||
|
||||
ifeq ($(BUILD_TYPE),sycl_f32)
|
||||
CMAKE_ARGS+=-DGGML_SYCL=ON \
|
||||
-DCMAKE_C_COMPILER=icx \
|
||||
-DCMAKE_CXX_COMPILER=icpx
|
||||
endif
|
||||
|
||||
sources/sam3.cpp:
|
||||
git clone --recursive $(SAM3_REPO) sources/sam3.cpp && \
|
||||
cd sources/sam3.cpp && \
|
||||
git checkout $(SAM3_VERSION) && \
|
||||
git submodule update --init --recursive --depth 1 --single-branch
|
||||
|
||||
# Detect OS
|
||||
UNAME_S := $(shell uname -s)
|
||||
|
||||
# Only build CPU variants on Linux
|
||||
ifeq ($(UNAME_S),Linux)
|
||||
VARIANT_TARGETS = libgosam3-avx.so libgosam3-avx2.so libgosam3-avx512.so libgosam3-fallback.so
|
||||
else
|
||||
# On non-Linux (e.g., Darwin), build only fallback variant
|
||||
VARIANT_TARGETS = libgosam3-fallback.so
|
||||
endif
|
||||
|
||||
sam3-cpp: main.go gosam3.go $(VARIANT_TARGETS)
|
||||
CGO_ENABLED=0 $(GOCMD) build -tags "$(GO_TAGS)" -o sam3-cpp ./
|
||||
|
||||
package: sam3-cpp
|
||||
bash package.sh
|
||||
|
||||
build: package
|
||||
|
||||
clean: purge
|
||||
rm -rf libgosam3*.so sam3-cpp package sources
|
||||
|
||||
purge:
|
||||
rm -rf build*
|
||||
|
||||
# Build all variants (Linux only)
|
||||
ifeq ($(UNAME_S),Linux)
|
||||
libgosam3-avx.so: sources/sam3.cpp
|
||||
$(MAKE) purge
|
||||
$(info ${GREEN}I sam3-cpp build info:avx${RESET})
|
||||
SO_TARGET=libgosam3-avx.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" $(MAKE) libgosam3-custom
|
||||
rm -rfv build*
|
||||
|
||||
libgosam3-avx2.so: sources/sam3.cpp
|
||||
$(MAKE) purge
|
||||
$(info ${GREEN}I sam3-cpp build info:avx2${RESET})
|
||||
SO_TARGET=libgosam3-avx2.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=off -DGGML_FMA=on -DGGML_F16C=on -DGGML_BMI2=on" $(MAKE) libgosam3-custom
|
||||
rm -rfv build*
|
||||
|
||||
libgosam3-avx512.so: sources/sam3.cpp
|
||||
$(MAKE) purge
|
||||
$(info ${GREEN}I sam3-cpp build info:avx512${RESET})
|
||||
SO_TARGET=libgosam3-avx512.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=on -DGGML_FMA=on -DGGML_F16C=on -DGGML_BMI2=on" $(MAKE) libgosam3-custom
|
||||
rm -rfv build*
|
||||
endif
|
||||
|
||||
# Build fallback variant (all platforms)
|
||||
libgosam3-fallback.so: sources/sam3.cpp
|
||||
$(MAKE) purge
|
||||
$(info ${GREEN}I sam3-cpp build info:fallback${RESET})
|
||||
SO_TARGET=libgosam3-fallback.so CMAKE_ARGS="$(CMAKE_ARGS) -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_BMI2=off" $(MAKE) libgosam3-custom
|
||||
rm -rfv build*
|
||||
|
||||
libgosam3-custom: CMakeLists.txt gosam3.cpp gosam3.h
|
||||
mkdir -p build-$(SO_TARGET) && \
|
||||
cd build-$(SO_TARGET) && \
|
||||
cmake .. $(CMAKE_ARGS) && \
|
||||
cmake --build . --config Release -j$(JOBS) && \
|
||||
cd .. && \
|
||||
mv build-$(SO_TARGET)/libgosam3.so ./$(SO_TARGET)
|
||||
|
||||
all: sam3-cpp package
|
||||
193
backend/go/sam3-cpp/gosam3.cpp
Normal file
|
|
@ -0,0 +1,193 @@
|
|||
#include "sam3.h"
|
||||
#include "gosam3.h"
|
||||
|
||||
#include <cstdio>
|
||||
#include <cstring>
|
||||
#include <memory>
|
||||
#include <vector>
|
||||
|
||||
#define STB_IMAGE_WRITE_IMPLEMENTATION
|
||||
#define STB_IMAGE_WRITE_STATIC
|
||||
#include "stb_image_write.h"
|
||||
|
||||
// Static state
|
||||
static std::shared_ptr<sam3_model> g_model;
|
||||
static sam3_state_ptr g_state;
|
||||
static sam3_result g_result;
|
||||
static std::vector<std::vector<unsigned char>> g_mask_pngs;
|
||||
|
||||
// Callback for stbi_write_png_to_mem via stbi_write_png_to_func
|
||||
static void png_write_callback(void *context, void *data, int size) {
|
||||
auto *buf = static_cast<std::vector<unsigned char>*>(context);
|
||||
auto *bytes = static_cast<unsigned char*>(data);
|
||||
buf->insert(buf->end(), bytes, bytes + size);
|
||||
}
|
||||
|
||||
// Encode all masks as PNGs after segmentation
|
||||
static void encode_masks_as_png() {
|
||||
g_mask_pngs.clear();
|
||||
g_mask_pngs.resize(g_result.detections.size());
|
||||
|
||||
for (size_t i = 0; i < g_result.detections.size(); i++) {
|
||||
const auto &mask = g_result.detections[i].mask;
|
||||
if (mask.width > 0 && mask.height > 0 && !mask.data.empty()) {
|
||||
stbi_write_png_to_func(png_write_callback, &g_mask_pngs[i],
|
||||
mask.width, mask.height, 1,
|
||||
mask.data.data(), mask.width);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
extern "C" {
|
||||
|
||||
int sam3_cpp_load_model(const char *model_path, int threads) {
|
||||
sam3_params params;
|
||||
params.model_path = model_path;
|
||||
params.n_threads = threads;
|
||||
params.use_gpu = true;
|
||||
|
||||
g_model = sam3_load_model(params);
|
||||
if (!g_model) {
|
||||
fprintf(stderr, "[sam3-cpp] Failed to load model: %s\n", model_path);
|
||||
return 1;
|
||||
}
|
||||
|
||||
g_state = sam3_create_state(*g_model, params);
|
||||
if (!g_state) {
|
||||
fprintf(stderr, "[sam3-cpp] Failed to create state\n");
|
||||
g_model.reset();
|
||||
return 2;
|
||||
}
|
||||
|
||||
fprintf(stderr, "[sam3-cpp] Model loaded: %s (threads=%d)\n", model_path, threads);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int sam3_cpp_encode_image(const char *image_path) {
|
||||
if (!g_model || !g_state) {
|
||||
fprintf(stderr, "[sam3-cpp] Model not loaded\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
sam3_image img = sam3_load_image(image_path);
|
||||
if (img.data.empty()) {
|
||||
fprintf(stderr, "[sam3-cpp] Failed to load image: %s\n", image_path);
|
||||
return 2;
|
||||
}
|
||||
|
||||
if (!sam3_encode_image(*g_state, *g_model, img)) {
|
||||
fprintf(stderr, "[sam3-cpp] Failed to encode image\n");
|
||||
return 3;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int sam3_cpp_segment_pvs(float *points, int n_point_triples,
|
||||
float *boxes, int n_box_quads,
|
||||
float threshold) {
|
||||
if (!g_model || !g_state) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
sam3_pvs_params pvs_params;
|
||||
|
||||
// Parse points: each triple is [x, y, label]
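// e.g. {100, 150, 1, 200, 250, 0} encodes a positive point at (100,150) and a negative point at (200,250).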
|
||||
for (int i = 0; i < n_point_triples; i++) {
|
||||
float x = points[i * 3];
|
||||
float y = points[i * 3 + 1];
|
||||
float label = points[i * 3 + 2];
|
||||
sam3_point pt = {x, y};
|
||||
if (label > 0.5f) {
|
||||
pvs_params.pos_points.push_back(pt);
|
||||
} else {
|
||||
pvs_params.neg_points.push_back(pt);
|
||||
}
|
||||
}
|
||||
|
||||
// Parse boxes: each quad is [x1, y1, x2, y2], use only first box
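// e.g. {10, 20, 200, 220} selects the region with corners (10,20) and (200,220).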
|
||||
if (n_box_quads > 0) {
|
||||
pvs_params.box = {boxes[0], boxes[1], boxes[2], boxes[3]};
|
||||
pvs_params.use_box = true;
|
||||
}
|
||||
|
||||
g_result = sam3_segment_pvs(*g_state, *g_model, pvs_params);
|
||||
encode_masks_as_png();
|
||||
|
||||
return static_cast<int>(g_result.detections.size());
|
||||
}
|
||||
|
||||
int sam3_cpp_segment_pcs(const char *text_prompt, float threshold) {
|
||||
if (!g_model || !g_state) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
// PCS mode requires SAM 3 (full model with text encoder)
|
||||
if (sam3_is_visual_only(*g_model) ||
|
||||
sam3_get_model_type(*g_model) != SAM3_MODEL_SAM3) {
|
||||
fprintf(stderr, "[sam3-cpp] PCS mode requires full SAM 3 model\n");
|
||||
return -1;
|
||||
}
|
||||
|
||||
sam3_pcs_params pcs_params;
|
||||
pcs_params.text_prompt = text_prompt;
|
||||
pcs_params.score_threshold = threshold > 0 ? threshold : 0.5f;
|
||||
|
||||
g_result = sam3_segment_pcs(*g_state, *g_model, pcs_params);
|
||||
encode_masks_as_png();
|
||||
|
||||
return static_cast<int>(g_result.detections.size());
|
||||
}
|
||||
|
||||
int sam3_cpp_get_n_detections(void) {
|
||||
return static_cast<int>(g_result.detections.size());
|
||||
}
|
||||
|
||||
float sam3_cpp_get_detection_x(int i) {
|
||||
if (i < 0 || i >= static_cast<int>(g_result.detections.size())) return 0;
|
||||
return g_result.detections[i].box.x0;
|
||||
}
|
||||
|
||||
float sam3_cpp_get_detection_y(int i) {
|
||||
if (i < 0 || i >= static_cast<int>(g_result.detections.size())) return 0;
|
||||
return g_result.detections[i].box.y0;
|
||||
}
|
||||
|
||||
float sam3_cpp_get_detection_w(int i) {
|
||||
if (i < 0 || i >= static_cast<int>(g_result.detections.size())) return 0;
|
||||
const auto &box = g_result.detections[i].box;
|
||||
return box.x1 - box.x0;
|
||||
}
|
||||
|
||||
float sam3_cpp_get_detection_h(int i) {
|
||||
if (i < 0 || i >= static_cast<int>(g_result.detections.size())) return 0;
|
||||
const auto &box = g_result.detections[i].box;
|
||||
return box.y1 - box.y0;
|
||||
}
|
||||
|
||||
float sam3_cpp_get_detection_score(int i) {
|
||||
if (i < 0 || i >= static_cast<int>(g_result.detections.size())) return 0;
|
||||
return g_result.detections[i].score;
|
||||
}
|
||||
|
||||
int sam3_cpp_get_detection_mask_png(int i, unsigned char *buf, int buf_size) {
|
||||
if (i < 0 || i >= static_cast<int>(g_mask_pngs.size())) return 0;
|
||||
|
||||
const auto &png = g_mask_pngs[i];
|
||||
int size = static_cast<int>(png.size());
|
||||
|
||||
if (buf == nullptr) {
|
||||
return size;
|
||||
}
|
||||
|
||||
int to_copy = size < buf_size ? size : buf_size;
|
||||
memcpy(buf, png.data(), to_copy);
|
||||
return to_copy;
|
||||
}
|
||||
|
||||
void sam3_cpp_free_results(void) {
|
||||
g_result.detections.clear();
|
||||
g_mask_pngs.clear();
|
||||
}
|
||||
|
||||
} // extern "C"
|
||||
backend/go/sam3-cpp/gosam3.go (new file, 143 lines)
|
|
@ -0,0 +1,143 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"unsafe"
|
||||
|
||||
"github.com/mudler/LocalAI/pkg/grpc/base"
|
||||
pb "github.com/mudler/LocalAI/pkg/grpc/proto"
|
||||
)
|
||||
|
||||
type SAM3 struct {
|
||||
base.SingleThread
|
||||
}
|
||||
|
||||
var (
|
||||
CppLoadModel func(modelPath string, threads int) int
|
||||
CppEncodeImage func(imagePath string) int
|
||||
CppSegmentPVS func(points uintptr, nPointTriples int, boxes uintptr, nBoxQuads int, threshold float32) int
|
||||
CppSegmentPCS func(textPrompt string, threshold float32) int
|
||||
CppGetNDetections func() int
|
||||
CppGetDetectionX func(i int) float32
|
||||
CppGetDetectionY func(i int) float32
|
||||
CppGetDetectionW func(i int) float32
|
||||
CppGetDetectionH func(i int) float32
|
||||
CppGetDetectionScore func(i int) float32
|
||||
CppGetDetectionMaskPNG func(i int, buf uintptr, bufSize int) int
|
||||
CppFreeResults func()
|
||||
)
|
||||
|
||||
func (s *SAM3) Load(opts *pb.ModelOptions) error {
|
||||
modelFile := opts.ModelFile
|
||||
if modelFile == "" {
|
||||
modelFile = opts.Model
|
||||
}
|
||||
|
||||
var modelPath string
|
||||
if filepath.IsAbs(modelFile) {
|
||||
modelPath = modelFile
|
||||
} else {
|
||||
modelPath = filepath.Join(opts.ModelPath, modelFile)
|
||||
}
|
||||
|
||||
threads := int(opts.Threads)
|
||||
if threads <= 0 {
|
||||
threads = 4
|
||||
}
|
||||
|
||||
ret := CppLoadModel(modelPath, threads)
|
||||
if ret != 0 {
|
||||
return fmt.Errorf("failed to load SAM3 model (error %d): %s", ret, modelPath)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *SAM3) Detect(opts *pb.DetectOptions) (pb.DetectResponse, error) {
|
||||
// Decode base64 image and write to temp file
|
||||
imgData, err := base64.StdEncoding.DecodeString(opts.Src)
|
||||
if err != nil {
|
||||
return pb.DetectResponse{}, fmt.Errorf("failed to decode image: %w", err)
|
||||
}
|
||||
|
||||
tmpFile, err := os.CreateTemp("", "sam3-*.png")
|
||||
if err != nil {
|
||||
return pb.DetectResponse{}, fmt.Errorf("failed to create temp file: %w", err)
|
||||
}
|
||||
defer os.Remove(tmpFile.Name())
|
||||
|
||||
if _, err := tmpFile.Write(imgData); err != nil {
|
||||
tmpFile.Close()
|
||||
return pb.DetectResponse{}, fmt.Errorf("failed to write temp file: %w", err)
|
||||
}
|
||||
tmpFile.Close()
|
||||
|
||||
// Encode image
|
||||
ret := CppEncodeImage(tmpFile.Name())
|
||||
if ret != 0 {
|
||||
return pb.DetectResponse{}, fmt.Errorf("failed to encode image (error %d)", ret)
|
||||
}
|
||||
|
||||
threshold := opts.Threshold
|
||||
if threshold <= 0 {
|
||||
threshold = 0.5
|
||||
}
|
||||
|
||||
// Determine segmentation mode
|
||||
var nDetections int
|
||||
if opts.Prompt != "" {
|
||||
// Text-prompted segmentation (PCS mode, SAM 3 only)
|
||||
nDetections = CppSegmentPCS(opts.Prompt, threshold)
|
||||
} else {
|
||||
// Point/box-prompted segmentation (PVS mode)
|
||||
var pointsPtr uintptr
|
||||
var boxesPtr uintptr
|
||||
nPointTriples := len(opts.Points) / 3
|
||||
nBoxQuads := len(opts.Boxes) / 4
|
||||
|
||||
if nPointTriples > 0 {
|
||||
pointsPtr = uintptr(unsafe.Pointer(&opts.Points[0]))
|
||||
}
|
||||
if nBoxQuads > 0 {
|
||||
boxesPtr = uintptr(unsafe.Pointer(&opts.Boxes[0]))
|
||||
}
|
||||
|
||||
nDetections = CppSegmentPVS(pointsPtr, nPointTriples, boxesPtr, nBoxQuads, threshold)
|
||||
}
|
||||
|
||||
if nDetections < 0 {
|
||||
return pb.DetectResponse{}, fmt.Errorf("segmentation failed")
|
||||
}
|
||||
|
||||
defer CppFreeResults()
|
||||
|
||||
// Build response
|
||||
detections := make([]*pb.Detection, nDetections)
|
||||
for i := 0; i < nDetections; i++ {
|
||||
det := &pb.Detection{
|
||||
X: CppGetDetectionX(i),
|
||||
Y: CppGetDetectionY(i),
|
||||
Width: CppGetDetectionW(i),
|
||||
Height: CppGetDetectionH(i),
|
||||
Confidence: CppGetDetectionScore(i),
|
||||
ClassName: "segment",
|
||||
}
|
||||
|
||||
// Get mask PNG
|
||||
maskSize := CppGetDetectionMaskPNG(i, 0, 0)
|
||||
if maskSize > 0 {
|
||||
maskBuf := make([]byte, maskSize)
|
||||
CppGetDetectionMaskPNG(i, uintptr(unsafe.Pointer(&maskBuf[0])), maskSize)
|
||||
det.Mask = maskBuf
|
||||
}
|
||||
|
||||
detections[i] = det
|
||||
}
|
||||
|
||||
return pb.DetectResponse{
|
||||
Detections: detections,
|
||||
}, nil
|
||||
}
|
||||
backend/go/sam3-cpp/gosam3.h (new file, 51 lines)
|
|
@ -0,0 +1,51 @@
|
|||
#ifndef GOSAM3_H
#define GOSAM3_H

#ifdef __cplusplus
extern "C" {
#endif

// Load model from file. Returns 0 on success, non-zero on failure.
int sam3_cpp_load_model(const char *model_path, int threads);

// Encode an image from file path. Must be called before segmentation.
// Returns 0 on success.
int sam3_cpp_encode_image(const char *image_path);

// Segment with point/box prompts (PVS mode).
// points: flat array of [x, y, label] triples (label: 1=positive, 0=negative)
// boxes: flat array of [x1, y1, x2, y2] quads
// Returns number of detections, or -1 on error.
int sam3_cpp_segment_pvs(float *points, int n_point_triples,
                         float *boxes, int n_box_quads,
                         float threshold);

// Segment with text prompt (PCS mode, SAM 3 only).
// Returns number of detections, or -1 on error.
int sam3_cpp_segment_pcs(const char *text_prompt, float threshold);

// Access detection results (valid after a segment call).
int sam3_cpp_get_n_detections(void);

// Get bounding box for detection i (as x, y, width, height).
float sam3_cpp_get_detection_x(int i);
float sam3_cpp_get_detection_y(int i);
float sam3_cpp_get_detection_w(int i);
float sam3_cpp_get_detection_h(int i);

// Get confidence score for detection i.
float sam3_cpp_get_detection_score(int i);

// Get mask as PNG-encoded bytes.
// If buf is NULL, returns the required buffer size.
// Otherwise writes up to buf_size bytes and returns bytes written.
int sam3_cpp_get_detection_mask_png(int i, unsigned char *buf, int buf_size);

// Free current detection results.
void sam3_cpp_free_results(void);

#ifdef __cplusplus
}
#endif

#endif // GOSAM3_H
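For reference, a minimal sketch (not part of this diff) of the size-query convention described above, as the Go side would call it through the purego bindings declared in gosam3.go; the helper name fetchMaskPNG is hypothetical, and it assumes the surrounding package already imports unsafe:

func fetchMaskPNG(getMaskPNG func(i int, buf uintptr, bufSize int) int, i int) []byte {
	// First call with a nil buffer: returns the required PNG size.
	size := getMaskPNG(i, 0, 0)
	if size <= 0 {
		return nil
	}
	// Second call copies up to bufSize bytes and returns the count written.
	buf := make([]byte, size)
	n := getMaskPNG(i, uintptr(unsafe.Pointer(&buf[0])), size)
	return buf[:n]
}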
|
||||
backend/go/sam3-cpp/main.go (new file, 56 lines)
|
|
@ -0,0 +1,56 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"os"
|
||||
|
||||
"github.com/ebitengine/purego"
|
||||
grpc "github.com/mudler/LocalAI/pkg/grpc"
|
||||
)
|
||||
|
||||
var (
|
||||
addr = flag.String("addr", "localhost:50051", "the address to connect to")
|
||||
)
|
||||
|
||||
type LibFuncs struct {
|
||||
FuncPtr any
|
||||
Name string
|
||||
}
|
||||
|
||||
func main() {
|
||||
// Get library name from environment variable, default to fallback
|
||||
libName := os.Getenv("SAM3_LIBRARY")
|
||||
if libName == "" {
|
||||
libName = "./libgosam3-fallback.so"
|
||||
}
|
||||
|
||||
gosamLib, err := purego.Dlopen(libName, purego.RTLD_NOW|purego.RTLD_GLOBAL)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
libFuncs := []LibFuncs{
|
||||
{&CppLoadModel, "sam3_cpp_load_model"},
|
||||
{&CppEncodeImage, "sam3_cpp_encode_image"},
|
||||
{&CppSegmentPVS, "sam3_cpp_segment_pvs"},
|
||||
{&CppSegmentPCS, "sam3_cpp_segment_pcs"},
|
||||
{&CppGetNDetections, "sam3_cpp_get_n_detections"},
|
||||
{&CppGetDetectionX, "sam3_cpp_get_detection_x"},
|
||||
{&CppGetDetectionY, "sam3_cpp_get_detection_y"},
|
||||
{&CppGetDetectionW, "sam3_cpp_get_detection_w"},
|
||||
{&CppGetDetectionH, "sam3_cpp_get_detection_h"},
|
||||
{&CppGetDetectionScore, "sam3_cpp_get_detection_score"},
|
||||
{&CppGetDetectionMaskPNG, "sam3_cpp_get_detection_mask_png"},
|
||||
{&CppFreeResults, "sam3_cpp_free_results"},
|
||||
}
|
||||
|
||||
for _, lf := range libFuncs {
|
||||
purego.RegisterLibFunc(lf.FuncPtr, gosamLib, lf.Name)
|
||||
}
|
||||
|
||||
flag.Parse()
|
||||
|
||||
if err := grpc.StartServer(*addr, &SAM3{}); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
backend/go/sam3-cpp/package.sh (new executable file, 59 lines)
|
|
@ -0,0 +1,59 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Script to copy the appropriate libraries based on architecture
|
||||
|
||||
set -e
|
||||
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
REPO_ROOT="${CURDIR}/../../.."
|
||||
|
||||
# Create lib directory
|
||||
mkdir -p $CURDIR/package/lib
|
||||
|
||||
cp -avf $CURDIR/libgosam3-*.so $CURDIR/package/
|
||||
cp -avf $CURDIR/sam3-cpp $CURDIR/package/
|
||||
cp -fv $CURDIR/run.sh $CURDIR/package/
|
||||
|
||||
# Detect architecture and copy appropriate libraries
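# Architecture is inferred from which dynamic loader is present on the build host (x86_64 vs aarch64), with a separate check for Darwin.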
|
||||
if [ -f "/lib64/ld-linux-x86-64.so.2" ]; then
|
||||
# x86_64 architecture
|
||||
echo "Detected x86_64 architecture, copying x86_64 libraries..."
|
||||
cp -arfLv /lib64/ld-linux-x86-64.so.2 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/x86_64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/x86_64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
elif [ -f "/lib/ld-linux-aarch64.so.1" ]; then
|
||||
# ARM64 architecture
|
||||
echo "Detected ARM64 architecture, copying ARM64 libraries..."
|
||||
cp -arfLv /lib/ld-linux-aarch64.so.1 $CURDIR/package/lib/ld.so
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libdl.so.2 $CURDIR/package/lib/libdl.so.2
|
||||
cp -arfLv /lib/aarch64-linux-gnu/librt.so.1 $CURDIR/package/lib/librt.so.1
|
||||
cp -arfLv /lib/aarch64-linux-gnu/libpthread.so.0 $CURDIR/package/lib/libpthread.so.0
|
||||
elif [ $(uname -s) = "Darwin" ]; then
|
||||
echo "Detected Darwin"
|
||||
else
|
||||
echo "Error: Could not detect architecture"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Package GPU libraries based on BUILD_TYPE
|
||||
GPU_LIB_SCRIPT="${REPO_ROOT}/scripts/build/package-gpu-libs.sh"
|
||||
if [ -f "$GPU_LIB_SCRIPT" ]; then
|
||||
echo "Packaging GPU libraries for BUILD_TYPE=${BUILD_TYPE:-cpu}..."
|
||||
source "$GPU_LIB_SCRIPT" "$CURDIR/package/lib"
|
||||
package_gpu_libs
|
||||
fi
|
||||
|
||||
echo "Packaging completed successfully"
|
||||
ls -liah $CURDIR/package/
|
||||
ls -liah $CURDIR/package/lib/
|
||||
backend/go/sam3-cpp/run.sh (new executable file, 52 lines)
|
|
@ -0,0 +1,52 @@
|
|||
#!/bin/bash
|
||||
set -ex
|
||||
|
||||
# Get the absolute current dir where the script is located
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
|
||||
cd /
|
||||
|
||||
echo "CPU info:"
|
||||
if [ "$(uname)" != "Darwin" ]; then
|
||||
grep -e "model\sname" /proc/cpuinfo | head -1
|
||||
grep -e "flags" /proc/cpuinfo | head -1
|
||||
fi
|
||||
|
||||
LIBRARY="$CURDIR/libgosam3-fallback.so"
|
||||
|
||||
if [ "$(uname)" != "Darwin" ]; then
|
||||
if grep -q -e "\savx\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX found OK"
|
||||
if [ -e $CURDIR/libgosam3-avx.so ]; then
|
||||
LIBRARY="$CURDIR/libgosam3-avx.so"
|
||||
fi
|
||||
fi
|
||||
|
||||
if grep -q -e "\savx2\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX2 found OK"
|
||||
if [ -e $CURDIR/libgosam3-avx2.so ]; then
|
||||
LIBRARY="$CURDIR/libgosam3-avx2.so"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check avx 512
|
||||
if grep -q -e "\savx512f\s" /proc/cpuinfo ; then
|
||||
echo "CPU: AVX512F found OK"
|
||||
if [ -e $CURDIR/libgosam3-avx512.so ]; then
|
||||
LIBRARY="$CURDIR/libgosam3-avx512.so"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
|
||||
export SAM3_LIBRARY=$LIBRARY
|
||||
|
||||
# If there is a lib/ld.so, use it
|
||||
if [ -f $CURDIR/lib/ld.so ]; then
|
||||
echo "Using lib/ld.so"
|
||||
echo "Using library: $LIBRARY"
|
||||
exec $CURDIR/lib/ld.so $CURDIR/sam3-cpp "$@"
|
||||
fi
|
||||
|
||||
echo "Using library: $LIBRARY"
|
||||
exec $CURDIR/sam3-cpp "$@"
|
||||
backend/go/sam3-cpp/test.sh (new executable file, 50 lines)
|
|
@ -0,0 +1,50 @@
|
|||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
CURDIR=$(dirname "$(realpath $0)")
|
||||
|
||||
echo "Running sam3-cpp backend tests..."
|
||||
|
||||
# The test requires a SAM model in GGML format.
|
||||
# Uses EdgeTAM Q4_0 (~15MB) for fast CI testing.
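# SAM3_MODEL_DIR, SAM3_MODEL_FILE and SAM3_MODEL_URL can all be overridden from the environment.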
|
||||
SAM3_MODEL_DIR="${SAM3_MODEL_DIR:-$CURDIR/test-models}"
|
||||
SAM3_MODEL_FILE="${SAM3_MODEL_FILE:-edgetam_q4_0.ggml}"
|
||||
SAM3_MODEL_URL="${SAM3_MODEL_URL:-https://huggingface.co/PABannier/sam3.cpp/resolve/main/edgetam_q4_0.ggml}"
|
||||
|
||||
# Download model if not present
|
||||
if [ ! -f "$SAM3_MODEL_DIR/$SAM3_MODEL_FILE" ]; then
|
||||
echo "Downloading EdgeTAM Q4_0 model for testing..."
|
||||
mkdir -p "$SAM3_MODEL_DIR"
|
||||
curl -L -o "$SAM3_MODEL_DIR/$SAM3_MODEL_FILE" "$SAM3_MODEL_URL" --progress-bar
|
||||
echo "Model downloaded."
|
||||
fi
|
||||
|
||||
# Create a small test image (a 64x64 solid red PNG)
|
||||
# This is a minimal valid PNG for testing the pipeline
|
||||
TEST_IMAGE_DIR="$CURDIR/test-data"
|
||||
mkdir -p "$TEST_IMAGE_DIR"
|
||||
|
||||
# Generate the test image with Python if available (otherwise this step is skipped)
|
||||
if command -v python3 &> /dev/null; then
|
||||
python3 -c "
|
||||
import struct, zlib, base64
|
||||
def create_png(width, height, r, g, b):
|
||||
raw = b''
|
||||
for y in range(height):
|
||||
raw += b'\x00' # filter byte
|
||||
for x in range(width):
|
||||
raw += bytes([r, g, b])
|
||||
def chunk(ctype, data):
|
||||
c = ctype + data
|
||||
return struct.pack('>I', len(data)) + c + struct.pack('>I', zlib.crc32(c) & 0xffffffff)
|
||||
ihdr = struct.pack('>IIBBBBB', width, height, 8, 2, 0, 0, 0)
|
||||
return b'\x89PNG\r\n\x1a\n' + chunk(b'IHDR', ihdr) + chunk(b'IDAT', zlib.compress(raw)) + chunk(b'IEND', b'')
|
||||
with open('$TEST_IMAGE_DIR/test.png', 'wb') as f:
|
||||
f.write(create_png(64, 64, 255, 0, 0))
|
||||
"
|
||||
echo "Test image created."
|
||||
fi
|
||||
|
||||
echo "sam3-cpp test setup complete."
|
||||
echo "Model: $SAM3_MODEL_DIR/$SAM3_MODEL_FILE"
|
||||
echo "Note: Full integration tests run via the LocalAI test-extra target."
|
||||
|
|
@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
|
|||
|
||||
# stablediffusion.cpp (ggml)
|
||||
STABLEDIFFUSION_GGML_REPO?=https://github.com/leejet/stable-diffusion.cpp
|
||||
STABLEDIFFUSION_GGML_VERSION?=8afbeb6ba9702c15d41a38296f2ab1fe5c829fa0
|
||||
STABLEDIFFUSION_GGML_VERSION?=44cca3d626d301e2215d5e243277e8f0e65bfa78
|
||||
|
||||
CMAKE_ARGS+=-DGGML_MAX_NAME=128
|
||||
|
||||
|
|
@ -32,7 +32,7 @@ else ifeq ($(BUILD_TYPE),hipblas)
|
|||
ROCM_PATH ?= /opt/rocm
|
||||
export CXX=$(ROCM_HOME)/llvm/bin/clang++
|
||||
export CC=$(ROCM_HOME)/llvm/bin/clang
|
||||
AMDGPU_TARGETS?=gfx803,gfx900,gfx906,gfx908,gfx90a,gfx942,gfx1010,gfx1030,gfx1032,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201
|
||||
AMDGPU_TARGETS?=gfx908,gfx90a,gfx942,gfx950,gfx1030,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201
|
||||
CMAKE_ARGS+=-DSD_HIPBLAS=ON -DGGML_HIPBLAS=ON -DAMDGPU_TARGETS=$(AMDGPU_TARGETS)
|
||||
else ifeq ($(BUILD_TYPE),vulkan)
|
||||
CMAKE_ARGS+=-DSD_VULKAN=ON -DGGML_VULKAN=ON
|
||||
|
|
|
|||
|
|
@ -26,6 +26,10 @@
|
|||
#include "stb_image_resize.h"
|
||||
#include <stdlib.h>
|
||||
#include <regex>
|
||||
#include <errno.h>
|
||||
#include <signal.h>
|
||||
#include <unistd.h>
|
||||
#include <sys/wait.h>
|
||||
|
||||
|
||||
|
||||
|
|
@ -980,6 +984,256 @@ int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, cha
|
|||
return !ret;
|
||||
}
|
||||
|
||||
// ---------------- Video generation ----------------
|
||||
|
||||
sd_vid_gen_params_t* sd_vid_gen_params_new(void) {
|
||||
sd_vid_gen_params_t *params = (sd_vid_gen_params_t *)std::malloc(sizeof(sd_vid_gen_params_t));
|
||||
sd_vid_gen_params_init(params);
|
||||
sd_sample_params_init(¶ms->sample_params);
|
||||
sd_sample_params_init(¶ms->high_noise_sample_params);
|
||||
sd_cache_params_init(¶ms->cache);
|
||||
return params;
|
||||
}
|
||||
|
||||
// Persistent storage for cleaned video prompts (kept alive for the duration of generation)
|
||||
static std::string cleaned_vid_prompt_storage;
|
||||
static std::string cleaned_vid_negative_prompt_storage;
|
||||
|
||||
void sd_vid_gen_params_set_prompts(sd_vid_gen_params_t *params, const char *prompt, const char *negative_prompt) {
|
||||
lora_vec.clear();
|
||||
lora_strings.clear();
|
||||
|
||||
std::string prompt_str = prompt ? prompt : "";
|
||||
std::string negative_prompt_str = negative_prompt ? negative_prompt : "";
|
||||
|
||||
const char* lora_dir_to_use = lora_dir_path.empty() ? nullptr : lora_dir_path.c_str();
|
||||
|
||||
auto [loras, cleaned_prompt] = parse_loras_from_prompt(prompt_str, lora_dir_to_use);
|
||||
lora_vec = loras;
|
||||
cleaned_vid_prompt_storage = cleaned_prompt;
|
||||
|
||||
auto [neg_loras, cleaned_negative] = parse_loras_from_prompt(negative_prompt_str, lora_dir_to_use);
|
||||
cleaned_vid_negative_prompt_storage = cleaned_negative;
|
||||
|
||||
params->prompt = cleaned_vid_prompt_storage.c_str();
|
||||
params->negative_prompt = cleaned_vid_negative_prompt_storage.c_str();
|
||||
params->loras = lora_vec.empty() ? nullptr : lora_vec.data();
|
||||
params->lora_count = static_cast<uint32_t>(lora_vec.size());
|
||||
}
|
||||
|
||||
void sd_vid_gen_params_set_dimensions(sd_vid_gen_params_t *params, int width, int height) {
|
||||
params->width = width;
|
||||
params->height = height;
|
||||
}
|
||||
|
||||
void sd_vid_gen_params_set_seed(sd_vid_gen_params_t *params, int64_t seed) {
|
||||
params->seed = seed;
|
||||
}
|
||||
|
||||
void sd_vid_gen_params_set_video_frames(sd_vid_gen_params_t *params, int n) {
|
||||
params->video_frames = n;
|
||||
}
|
||||
|
||||
// Load an image file into an sd_image_t, resizing to target dims if needed.
|
||||
// Returns a heap-allocated buffer the caller must free (or nullptr on failure).
|
||||
static uint8_t* load_and_resize_image(const char* path, int target_width, int target_height, sd_image_t* out) {
|
||||
if (!path || strlen(path) == 0) {
|
||||
*out = {0, 0, 0, nullptr};
|
||||
return nullptr;
|
||||
}
|
||||
int c = 0, img_w = 0, img_h = 0;
|
||||
uint8_t* buf = stbi_load(path, &img_w, &img_h, &c, 3);
|
||||
if (!buf) {
|
||||
fprintf(stderr, "Failed to load image from '%s'\n", path);
|
||||
*out = {0, 0, 0, nullptr};
|
||||
return nullptr;
|
||||
}
|
||||
if (img_w != target_width || img_h != target_height) {
|
||||
fprintf(stderr, "Resizing image from %dx%d to %dx%d\n", img_w, img_h, target_width, target_height);
|
||||
uint8_t* resized = (uint8_t*)malloc((size_t)target_width * target_height * 3);
|
||||
if (!resized) { free(buf); *out = {0, 0, 0, nullptr}; return nullptr; }
|
||||
stbir_resize(buf, img_w, img_h, 0,
|
||||
resized, target_width, target_height, 0, STBIR_TYPE_UINT8,
|
||||
3, STBIR_ALPHA_CHANNEL_NONE, 0,
|
||||
STBIR_EDGE_CLAMP, STBIR_EDGE_CLAMP,
|
||||
STBIR_FILTER_BOX, STBIR_FILTER_BOX,
|
||||
STBIR_COLORSPACE_SRGB, nullptr);
|
||||
free(buf);
|
||||
buf = resized;
|
||||
}
|
||||
*out = {(uint32_t)target_width, (uint32_t)target_height, 3, buf};
|
||||
return buf;
|
||||
}
|
||||
|
||||
// Pipe raw RGB/RGBA frames to ffmpeg stdin and let it produce an MP4 at dst.
|
||||
// Uses fork+execvp to avoid shell interpretation of dst.
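// For reference, the spawned pipeline is roughly equivalent to (e.g. 512x512 RGB frames at 16 fps):
//   ffmpeg -y -hide_banner -loglevel warning -f rawvideo -pix_fmt rgb24 -s 512x512 -framerate 16 \
//          -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart -f mp4 <dst>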
|
||||
static int ffmpeg_mux_raw_to_mp4(sd_image_t* frames, int num_frames, int fps, const char* dst) {
|
||||
if (num_frames <= 0 || !frames || !frames[0].data) {
|
||||
fprintf(stderr, "ffmpeg_mux: empty frames\n");
|
||||
return 1;
|
||||
}
|
||||
int width = (int)frames[0].width;
|
||||
int height = (int)frames[0].height;
|
||||
int channels = (int)frames[0].channel;
|
||||
const char* pix_fmt_in = (channels == 4) ? "rgba" : "rgb24";
|
||||
|
||||
char size_str[32];
|
||||
char fps_str[32];
|
||||
snprintf(size_str, sizeof(size_str), "%dx%d", width, height);
|
||||
snprintf(fps_str, sizeof(fps_str), "%d", fps);
|
||||
|
||||
int pipefd[2];
|
||||
if (pipe(pipefd) != 0) { perror("pipe"); return 1; }
|
||||
|
||||
pid_t pid = fork();
|
||||
if (pid < 0) { perror("fork"); close(pipefd[0]); close(pipefd[1]); return 1; }
|
||||
|
||||
if (pid == 0) {
|
||||
// child
|
||||
close(pipefd[1]);
|
||||
if (dup2(pipefd[0], STDIN_FILENO) < 0) { perror("dup2"); _exit(127); }
|
||||
close(pipefd[0]);
|
||||
std::vector<char*> argv = {
|
||||
const_cast<char*>("ffmpeg"),
|
||||
const_cast<char*>("-y"),
|
||||
const_cast<char*>("-hide_banner"),
|
||||
const_cast<char*>("-loglevel"), const_cast<char*>("warning"),
|
||||
const_cast<char*>("-f"), const_cast<char*>("rawvideo"),
|
||||
const_cast<char*>("-pix_fmt"), const_cast<char*>(pix_fmt_in),
|
||||
const_cast<char*>("-s"), size_str,
|
||||
const_cast<char*>("-framerate"), fps_str,
|
||||
const_cast<char*>("-i"), const_cast<char*>("-"),
|
||||
const_cast<char*>("-c:v"), const_cast<char*>("libx264"),
|
||||
const_cast<char*>("-pix_fmt"), const_cast<char*>("yuv420p"),
|
||||
const_cast<char*>("-movflags"), const_cast<char*>("+faststart"),
|
||||
// Force MP4 container. Distributed LocalAI hands us a staging
|
||||
// path (e.g. /staging/localai-output-NNN.tmp) with a non-standard
|
||||
// extension; relying on filename suffix makes ffmpeg bail with
|
||||
// "Unable to choose an output format".
|
||||
const_cast<char*>("-f"), const_cast<char*>("mp4"),
|
||||
const_cast<char*>(dst),
|
||||
nullptr
|
||||
};
|
||||
execvp(argv[0], argv.data());
|
||||
perror("execvp ffmpeg");
|
||||
_exit(127);
|
||||
}
|
||||
|
||||
// parent
|
||||
close(pipefd[0]);
|
||||
|
||||
// Ignore SIGPIPE so a dying ffmpeg surfaces via write() errno instead of killing us.
|
||||
signal(SIGPIPE, SIG_IGN);
|
||||
|
||||
for (int i = 0; i < num_frames; i++) {
|
||||
if (!frames[i].data) continue;
|
||||
size_t frame_bytes = (size_t)frames[i].width * frames[i].height * frames[i].channel;
|
||||
const uint8_t* p = frames[i].data;
|
||||
size_t remaining = frame_bytes;
|
||||
while (remaining > 0) {
|
||||
ssize_t n = write(pipefd[1], p, remaining);
|
||||
if (n < 0) {
|
||||
if (errno == EINTR) continue;
|
||||
perror("write frame to ffmpeg");
|
||||
close(pipefd[1]);
|
||||
int status;
|
||||
waitpid(pid, &status, 0);
|
||||
return 1;
|
||||
}
|
||||
p += n;
|
||||
remaining -= (size_t)n;
|
||||
}
|
||||
}
|
||||
close(pipefd[1]);
|
||||
|
||||
int status = 0;
|
||||
while (waitpid(pid, &status, 0) < 0) {
|
||||
if (errno != EINTR) { perror("waitpid"); return 1; }
|
||||
}
|
||||
if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
|
||||
fprintf(stderr, "ffmpeg exited with status %d\n", status);
|
||||
return 1;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
int gen_video(sd_vid_gen_params_t *p, int steps, char *dst, float cfg_scale, int fps, char *init_image, char *end_image) {
|
||||
if (!p) return 1;
|
||||
if (!dst || strlen(dst) == 0) {
|
||||
fprintf(stderr, "gen_video: dst is empty\n");
|
||||
std::free(p);
|
||||
return 1;
|
||||
}
|
||||
|
||||
std::vector<int> skip_layers = {7, 8, 9};
|
||||
|
||||
fprintf(stderr, "Generating video: %dx%d, frames=%d, fps=%d, steps=%d, cfg=%.2f\n",
|
||||
p->width, p->height, p->video_frames, fps, steps, cfg_scale);
|
||||
|
||||
// Sample params (shared by both low and high-noise passes — MoE models use the high-noise
|
||||
// set during the first phase; single-model Wan2.1 ignores it. Same defaults for both is fine.)
|
||||
p->sample_params.guidance.txt_cfg = cfg_scale;
|
||||
p->sample_params.guidance.slg.layers = skip_layers.data();
|
||||
p->sample_params.guidance.slg.layer_count = skip_layers.size();
|
||||
p->sample_params.sample_method = sample_method;
|
||||
p->sample_params.sample_steps = steps;
|
||||
p->sample_params.scheduler = scheduler;
|
||||
p->sample_params.flow_shift = flow_shift;
|
||||
|
||||
p->high_noise_sample_params.guidance.txt_cfg = cfg_scale;
|
||||
p->high_noise_sample_params.guidance.slg.layers = skip_layers.data();
|
||||
p->high_noise_sample_params.guidance.slg.layer_count = skip_layers.size();
|
||||
p->high_noise_sample_params.sample_method = sample_method;
|
||||
p->high_noise_sample_params.sample_steps = steps;
|
||||
p->high_noise_sample_params.scheduler = scheduler;
|
||||
p->high_noise_sample_params.flow_shift = flow_shift;
|
||||
|
||||
// Load init/end reference images if provided (resized to output dims).
|
||||
uint8_t* init_buf = nullptr;
|
||||
uint8_t* end_buf = nullptr;
|
||||
sd_image_t init_img = {0, 0, 0, nullptr};
|
||||
sd_image_t end_img = {0, 0, 0, nullptr};
|
||||
if (init_image && strlen(init_image) > 0) {
|
||||
init_buf = load_and_resize_image(init_image, p->width, p->height, &init_img);
|
||||
if (!init_buf) { std::free(p); return 1; }
|
||||
}
|
||||
if (end_image && strlen(end_image) > 0) {
|
||||
end_buf = load_and_resize_image(end_image, p->width, p->height, &end_img);
|
||||
if (!end_buf) { if (init_buf) free(init_buf); std::free(p); return 1; }
|
||||
}
|
||||
p->init_image = init_img;
|
||||
p->end_image = end_img;
|
||||
|
||||
// Generate
|
||||
int num_frames_out = 0;
|
||||
sd_image_t* frames = generate_video(sd_c, p, &num_frames_out);
|
||||
std::free(p);
|
||||
|
||||
if (!frames || num_frames_out == 0) {
|
||||
fprintf(stderr, "generate_video produced no frames\n");
|
||||
if (init_buf) free(init_buf);
|
||||
if (end_buf) free(end_buf);
|
||||
return 1;
|
||||
}
|
||||
|
||||
fprintf(stderr, "Generated %d frames, muxing to %s via ffmpeg\n", num_frames_out, dst);
|
||||
|
||||
int rc = ffmpeg_mux_raw_to_mp4(frames, num_frames_out, fps, dst);
|
||||
|
||||
for (int i = 0; i < num_frames_out; i++) {
|
||||
if (frames[i].data) free(frames[i].data);
|
||||
}
|
||||
free(frames);
|
||||
if (init_buf) free(init_buf);
|
||||
if (end_buf) free(end_buf);
|
||||
|
||||
if (rc == 0) {
|
||||
fprintf(stderr, "gen_video done: %s\n", dst);
|
||||
}
|
||||
fflush(stderr);
|
||||
return rc;
|
||||
}
|
||||
|
||||
int unload() {
|
||||
free_sd_ctx(sd_c);
|
||||
return 0;
|
||||
|
|
|
|||
|
|
@ -23,6 +23,7 @@ type SDGGML struct {
|
|||
var (
|
||||
LoadModel func(model, model_path string, options []uintptr, threads int32, diff int) int
|
||||
GenImage func(params uintptr, steps int, dst string, cfgScale float32, srcImage string, strength float32, maskImage string, refImages []uintptr, refImagesCount int) int
|
||||
GenVideo func(params uintptr, steps int, dst string, cfgScale float32, fps int, initImage string, endImage string) int
|
||||
|
||||
TilingParamsSetEnabled func(params uintptr, enabled bool)
|
||||
TilingParamsSetTileSizes func(params uintptr, tileSizeX int, tileSizeY int)
|
||||
|
|
@ -34,6 +35,12 @@ var (
|
|||
ImgGenParamsSetDimensions func(params uintptr, width int, height int)
|
||||
ImgGenParamsSetSeed func(params uintptr, seed int64)
|
||||
ImgGenParamsGetVaeTilingParams func(params uintptr) uintptr
|
||||
|
||||
VidGenParamsNew func() uintptr
|
||||
VidGenParamsSetPrompts func(params uintptr, prompt string, negativePrompt string)
|
||||
VidGenParamsSetDimensions func(params uintptr, width int, height int)
|
||||
VidGenParamsSetSeed func(params uintptr, seed int64)
|
||||
VidGenParamsSetVideoFrames func(params uintptr, n int)
|
||||
)
|
||||
|
||||
// Copied from Purego internal/strings
|
||||
|
|
@ -153,3 +160,58 @@ func (sd *SDGGML) GenerateImage(opts *pb.GenerateImageRequest) error {
|
|||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (sd *SDGGML) GenerateVideo(opts *pb.GenerateVideoRequest) error {
|
||||
dst := opts.Dst
|
||||
if dst == "" {
|
||||
return fmt.Errorf("dst is empty")
|
||||
}
|
||||
|
||||
width := int(opts.Width)
|
||||
height := int(opts.Height)
|
||||
if width == 0 {
|
||||
width = 512
|
||||
}
|
||||
if height == 0 {
|
||||
height = 512
|
||||
}
|
||||
|
||||
numFrames := int(opts.NumFrames)
|
||||
if numFrames <= 0 {
|
||||
numFrames = 16
|
||||
}
|
||||
|
||||
fps := int(opts.Fps)
|
||||
if fps <= 0 {
|
||||
fps = 16
|
||||
}
|
||||
|
||||
steps := int(opts.Step)
|
||||
if steps <= 0 {
|
||||
steps = 20
|
||||
}
|
||||
|
||||
cfg := opts.CfgScale
|
||||
if cfg == 0 {
|
||||
cfg = sd.cfgScale
|
||||
}
|
||||
if cfg == 0 {
|
||||
cfg = 5.0
|
||||
}
|
||||
|
||||
// sd_vid_gen_params_new allocates; gen_video frees it after the generation call.
|
||||
p := VidGenParamsNew()
|
||||
VidGenParamsSetPrompts(p, opts.Prompt, opts.NegativePrompt)
|
||||
VidGenParamsSetDimensions(p, width, height)
|
||||
VidGenParamsSetSeed(p, int64(opts.Seed))
|
||||
VidGenParamsSetVideoFrames(p, numFrames)
|
||||
|
||||
fmt.Fprintf(os.Stderr, "GenerateVideo: dst=%s size=%dx%d frames=%d fps=%d steps=%d cfg=%.2f\n",
|
||||
dst, width, height, numFrames, fps, steps, cfg)
|
||||
|
||||
ret := GenVideo(p, steps, dst, cfg, fps, opts.StartImage, opts.EndImage)
|
||||
if ret != 0 {
|
||||
return fmt.Errorf("video inference failed (code %d)", ret)
|
||||
}
|
||||
return nil
|
||||
}
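For illustration, a caller would fill the request roughly as below (a sketch only; the field names come from the opts usage above, and the values shown are the defaults GenerateVideo applies when fields are zero):

req := &pb.GenerateVideoRequest{
	Prompt:    "a sailboat at sunset", // text prompt
	Width:     512,                    // defaulted to 512 when 0
	Height:    512,
	NumFrames: 16,                     // defaulted to 16 when <= 0
	Fps:       16,
	Step:      20,                     // sampling steps
	CfgScale:  5.0,
	Dst:       "/tmp/out.mp4",         // output path muxed by ffmpeg
}
// err := sd.GenerateVideo(req)  // assuming sd is an *SDGGML that has already loaded a model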
|
||||
|
|
|
|||
|
|
@ -18,6 +18,13 @@ void sd_img_gen_params_set_seed(sd_img_gen_params_t *params, int64_t seed);
|
|||
|
||||
int load_model(const char *model, char *model_path, char* options[], int threads, int diffusionModel);
|
||||
int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char* ref_images[], int ref_images_count);
|
||||
|
||||
sd_vid_gen_params_t* sd_vid_gen_params_new(void);
|
||||
void sd_vid_gen_params_set_prompts(sd_vid_gen_params_t *params, const char *prompt, const char *negative_prompt);
|
||||
void sd_vid_gen_params_set_dimensions(sd_vid_gen_params_t *params, int width, int height);
|
||||
void sd_vid_gen_params_set_seed(sd_vid_gen_params_t *params, int64_t seed);
|
||||
void sd_vid_gen_params_set_video_frames(sd_vid_gen_params_t *params, int n);
|
||||
int gen_video(sd_vid_gen_params_t *p, int steps, char *dst, float cfg_scale, int fps, char *init_image, char *end_image);
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
#endif
|
||||
|
|
|
|||
|
|
@ -32,6 +32,7 @@ func main() {
|
|||
libFuncs := []LibFuncs{
|
||||
{&LoadModel, "load_model"},
|
||||
{&GenImage, "gen_image"},
|
||||
{&GenVideo, "gen_video"},
|
||||
{&TilingParamsSetEnabled, "sd_tiling_params_set_enabled"},
|
||||
{&TilingParamsSetTileSizes, "sd_tiling_params_set_tile_sizes"},
|
||||
{&TilingParamsSetRelSizes, "sd_tiling_params_set_rel_sizes"},
|
||||
|
|
@ -42,6 +43,12 @@ func main() {
|
|||
{&ImgGenParamsSetDimensions, "sd_img_gen_params_set_dimensions"},
|
||||
{&ImgGenParamsSetSeed, "sd_img_gen_params_set_seed"},
|
||||
{&ImgGenParamsGetVaeTilingParams, "sd_img_gen_params_get_vae_tiling_params"},
|
||||
|
||||
{&VidGenParamsNew, "sd_vid_gen_params_new"},
|
||||
{&VidGenParamsSetPrompts, "sd_vid_gen_params_set_prompts"},
|
||||
{&VidGenParamsSetDimensions, "sd_vid_gen_params_set_dimensions"},
|
||||
{&VidGenParamsSetSeed, "sd_vid_gen_params_set_seed"},
|
||||
{&VidGenParamsSetVideoFrames, "sd_vid_gen_params_set_video_frames"},
|
||||
}
|
||||
|
||||
for _, lf := range libFuncs {
|
||||
|
|
|
|||
|
|
@ -56,5 +56,6 @@ func (v *Voxtral) AudioTranscription(opts *pb.TranscriptRequest) (pb.TranscriptR
|
|||
return pb.TranscriptResult{
|
||||
Segments: segments,
|
||||
Text: text,
|
||||
Language: opts.Language,
|
||||
}, nil
|
||||
}
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
|
|||
|
||||
# whisper.cpp version
|
||||
WHISPER_REPO?=https://github.com/ggml-org/whisper.cpp
|
||||
WHISPER_CPP_VERSION?=95ea8f9bfb03a15db08a8989966fd1ae3361e20d
|
||||
WHISPER_CPP_VERSION?=fc674574ca27cac59a15e5b22a09b9d9ad62aafe
|
||||
SO_TARGET?=libgowhisper.so
|
||||
|
||||
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF
|
||||
|
|
|
|||
|
|
@ -120,6 +120,12 @@ func (w *Whisper) AudioTranscription(opts *pb.TranscriptRequest) (pb.TranscriptR
|
|||
}
|
||||
|
||||
data := buf.AsFloat32Buffer().Data
|
||||
// whisper.cpp resamples to 16 kHz internally; this matches buf.Format.SampleRate
|
||||
// for the converted file produced by AudioToWav above.
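// e.g. 160,000 samples at 16,000 Hz gives a duration of 10.0 s.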
|
||||
var duration float32
|
||||
if buf.Format != nil && buf.Format.SampleRate > 0 {
|
||||
duration = float32(len(data)) / float32(buf.Format.SampleRate)
|
||||
}
|
||||
segsLen := uintptr(0xdeadbeef)
|
||||
segsLenPtr := unsafe.Pointer(&segsLen)
|
||||
|
||||
|
|
@ -158,5 +164,7 @@ func (w *Whisper) AudioTranscription(opts *pb.TranscriptRequest) (pb.TranscriptR
|
|||
return pb.TranscriptResult{
|
||||
Segments: segments,
|
||||
Text: strings.TrimSpace(text),
|
||||
Language: opts.Language,
|
||||
Duration: duration,
|
||||
}, nil
|
||||
}
|
||||
|
|
|
|||
|
|
@ -29,6 +29,49 @@
|
|||
nvidia-cuda-12: "cuda12-llama-cpp"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp"
|
||||
- &ikllamacpp
|
||||
name: "ik-llama-cpp"
|
||||
alias: "ik-llama-cpp"
|
||||
license: mit
|
||||
description: |
|
||||
Fork of llama.cpp optimized for CPU performance by ikawrakow
|
||||
urls:
|
||||
- https://github.com/ikawrakow/ik_llama.cpp
|
||||
tags:
|
||||
- text-to-text
|
||||
- LLM
|
||||
- CPU
|
||||
capabilities:
|
||||
default: "cpu-ik-llama-cpp"
|
||||
- &turboquant
|
||||
name: "turboquant"
|
||||
alias: "turboquant"
|
||||
license: mit
|
||||
description: |
|
||||
Fork of llama.cpp adding the TurboQuant KV-cache quantization scheme.
|
||||
Reuses the LocalAI llama.cpp gRPC server sources against the fork's libllama.
|
||||
urls:
|
||||
- https://github.com/TheTom/llama-cpp-turboquant
|
||||
tags:
|
||||
- text-to-text
|
||||
- LLM
|
||||
- CPU
|
||||
- GPU
|
||||
- CUDA
|
||||
- HIP
|
||||
- turboquant
|
||||
- kv-cache
|
||||
capabilities:
|
||||
default: "cpu-turboquant"
|
||||
nvidia: "cuda12-turboquant"
|
||||
intel: "intel-sycl-f16-turboquant"
|
||||
amd: "rocm-turboquant"
|
||||
vulkan: "vulkan-turboquant"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-turboquant"
|
||||
nvidia-cuda-13: "cuda13-turboquant"
|
||||
nvidia-cuda-12: "cuda12-turboquant"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-turboquant"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-turboquant"
|
||||
- &whispercpp
|
||||
name: "whisper"
|
||||
alias: "whisper"
|
||||
|
|
@ -125,6 +168,31 @@
|
|||
nvidia-cuda-13: "cuda13-rfdetr"
|
||||
nvidia-cuda-12: "cuda12-rfdetr"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-rfdetr"
|
||||
- &sam3cpp
|
||||
name: "sam3-cpp"
|
||||
alias: "sam3-cpp"
|
||||
license: mit
|
||||
description: |
|
||||
Segment Anything Model (SAM 3/2/EdgeTAM) in C/C++ using GGML.
|
||||
Supports text-prompted and point/box-prompted image segmentation.
|
||||
urls:
|
||||
- https://github.com/PABannier/sam3.cpp
|
||||
tags:
|
||||
- image-segmentation
|
||||
- object-detection
|
||||
- sam3
|
||||
- gpu
|
||||
- cpu
|
||||
capabilities:
|
||||
default: "cpu-sam3-cpp"
|
||||
nvidia: "cuda12-sam3-cpp"
|
||||
nvidia-cuda-12: "cuda12-sam3-cpp"
|
||||
nvidia-cuda-13: "cuda13-sam3-cpp"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-sam3-cpp"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-sam3-cpp"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sam3-cpp"
|
||||
intel: "intel-sycl-f32-sam3-cpp"
|
||||
vulkan: "vulkan-sam3-cpp"
|
||||
- &vllm
|
||||
name: "vllm"
|
||||
license: apache-2.0
|
||||
|
|
@ -158,6 +226,29 @@
|
|||
amd: "rocm-vllm"
|
||||
intel: "intel-vllm"
|
||||
nvidia-cuda-12: "cuda12-vllm"
|
||||
cpu: "cpu-vllm"
|
||||
- &sglang
|
||||
name: "sglang"
|
||||
license: apache-2.0
|
||||
urls:
|
||||
- https://github.com/sgl-project/sglang
|
||||
tags:
|
||||
- text-to-text
|
||||
- multimodal
|
||||
icon: https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png
|
||||
description: |
|
||||
SGLang is a fast serving framework for large language models and vision language models.
|
||||
It co-designs the backend runtime (RadixAttention, continuous batching, structured
|
||||
decoding) and the frontend language to make interaction with models faster and more
|
||||
controllable. Features include fast backend runtime, flexible frontend language,
|
||||
extensive model support, and an active community.
|
||||
alias: "sglang"
|
||||
capabilities:
|
||||
nvidia: "cuda12-sglang"
|
||||
amd: "rocm-sglang"
|
||||
intel: "intel-sglang"
|
||||
nvidia-cuda-12: "cuda12-sglang"
|
||||
cpu: "cpu-sglang"
|
||||
- &vllm-omni
|
||||
name: "vllm-omni"
|
||||
license: apache-2.0
|
||||
|
|
@ -292,6 +383,34 @@
|
|||
intel: "intel-rerankers"
|
||||
amd: "rocm-rerankers"
|
||||
metal: "metal-rerankers"
|
||||
- &tinygrad
|
||||
name: "tinygrad"
|
||||
alias: "tinygrad"
|
||||
license: MIT
|
||||
description: |
|
||||
tinygrad is a minimalist deep-learning framework with zero runtime
|
||||
dependencies that targets CUDA, ROCm, Metal, WebGPU and CPU (CLANG).
|
||||
The LocalAI tinygrad backend exposes a single multimodal runtime that
|
||||
covers LLM text generation (Llama / Qwen / Mistral via safetensors or
|
||||
GGUF) with native tool-call extraction, BERT-family embeddings,
|
||||
Stable Diffusion 1.x / 2 / XL image generation, and Whisper speech-to-text.
|
||||
|
||||
Single image: tinygrad generates its own GPU kernels and dlopens the
|
||||
host driver libraries at runtime, so there is no per-toolkit build
|
||||
split. The same image runs CPU-only or accelerates against
|
||||
CUDA / ROCm / Metal when the host driver is visible.
|
||||
urls:
|
||||
- https://github.com/tinygrad/tinygrad
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-tinygrad"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-tinygrad
|
||||
tags:
|
||||
- text-to-text
|
||||
- LLM
|
||||
- embeddings
|
||||
- image-generation
|
||||
- transcription
|
||||
- multimodal
|
||||
- &transformers
|
||||
name: "transformers"
|
||||
icon: https://avatars.githubusercontent.com/u/25720743?s=200&v=4
|
||||
|
|
@ -387,6 +506,30 @@
|
|||
nvidia-l4t: "nvidia-l4t-arm64-acestep-cpp"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-acestep-cpp"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-acestep-cpp"
|
||||
- &qwen3ttscpp
|
||||
name: "qwen3-tts-cpp"
|
||||
description: |
|
||||
Qwen3-TTS C++ backend using GGML. Native C++ text-to-speech with voice cloning support.
|
||||
Generates 24kHz mono audio from text with optional reference audio for voice cloning via ECAPA-TDNN speaker embeddings.
|
||||
urls:
|
||||
- https://github.com/predict-woo/qwen3-tts.cpp
|
||||
tags:
|
||||
- text-to-speech
|
||||
- tts
|
||||
- voice-cloning
|
||||
alias: "qwen3-tts-cpp"
|
||||
capabilities:
|
||||
default: "cpu-qwen3-tts-cpp"
|
||||
nvidia: "cuda12-qwen3-tts-cpp"
|
||||
nvidia-cuda-13: "cuda13-qwen3-tts-cpp"
|
||||
nvidia-cuda-12: "cuda12-qwen3-tts-cpp"
|
||||
intel: "intel-sycl-f16-qwen3-tts-cpp"
|
||||
metal: "metal-qwen3-tts-cpp"
|
||||
amd: "rocm-qwen3-tts-cpp"
|
||||
vulkan: "vulkan-qwen3-tts-cpp"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
- &faster-whisper
|
||||
icon: https://avatars.githubusercontent.com/u/1520500?s=200&v=4
|
||||
description: |
|
||||
|
|
@ -400,12 +543,15 @@
|
|||
license: MIT
|
||||
name: "faster-whisper"
|
||||
capabilities:
|
||||
default: "cpu-faster-whisper"
|
||||
nvidia: "cuda12-faster-whisper"
|
||||
intel: "intel-faster-whisper"
|
||||
amd: "rocm-faster-whisper"
|
||||
metal: "metal-faster-whisper"
|
||||
nvidia-cuda-13: "cuda13-faster-whisper"
|
||||
nvidia-cuda-12: "cuda12-faster-whisper"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-faster-whisper"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-faster-whisper"
|
||||
- &moonshine
|
||||
description: |
|
||||
Moonshine is a fast, accurate, and efficient speech-to-text transcription model using ONNX Runtime.
|
||||
|
|
@ -438,6 +584,7 @@
|
|||
- whisperx
|
||||
license: BSD-4-Clause
|
||||
name: "whisperx"
|
||||
alias: "whisperx"
|
||||
capabilities:
|
||||
nvidia: "cuda12-whisperx"
|
||||
amd: "rocm-whisperx"
|
||||
|
|
@ -445,6 +592,8 @@
|
|||
default: "cpu-whisperx"
|
||||
nvidia-cuda-13: "cuda13-whisperx"
|
||||
nvidia-cuda-12: "cuda12-whisperx"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-whisperx"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisperx"
|
||||
- &kokoro
|
||||
icon: https://avatars.githubusercontent.com/u/166769057?v=4
|
||||
description: |
|
||||
|
|
@ -468,6 +617,26 @@
|
|||
nvidia-cuda-13: "cuda13-kokoro"
|
||||
nvidia-cuda-12: "cuda12-kokoro"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-kokoro"
|
||||
- &kokoros
|
||||
icon: https://avatars.githubusercontent.com/u/166769057?v=4
|
||||
description: |
|
||||
Kokoros is a pure Rust TTS backend using the Kokoro ONNX model (82M parameters).
|
||||
It provides fast, high-quality text-to-speech with streaming support, built on
|
||||
ONNX Runtime for efficient CPU inference. Supports English, Japanese, Mandarin
|
||||
Chinese, and German.
|
||||
urls:
|
||||
- https://huggingface.co/hexgrad/Kokoro-82M
|
||||
- https://github.com/lucasjinreal/Kokoros
|
||||
tags:
|
||||
- text-to-speech
|
||||
- TTS
|
||||
- Rust
|
||||
- ONNX
|
||||
license: apache-2.0
|
||||
alias: "kokoros"
|
||||
name: "kokoros"
|
||||
capabilities:
|
||||
default: "cpu-kokoros"
|
||||
- &coqui
|
||||
urls:
|
||||
- https://github.com/idiap/coqui-ai-TTS
|
||||
|
|
@ -822,6 +991,37 @@
|
|||
nvidia-cuda-12: "cuda12-llama-cpp-development"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp-development"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
|
||||
- !!merge <<: *ikllamacpp
|
||||
name: "ik-llama-cpp-development"
|
||||
capabilities:
|
||||
default: "cpu-ik-llama-cpp-development"
|
||||
- !!merge <<: *turboquant
|
||||
name: "turboquant-development"
|
||||
capabilities:
|
||||
default: "cpu-turboquant-development"
|
||||
nvidia: "cuda12-turboquant-development"
|
||||
intel: "intel-sycl-f16-turboquant-development"
|
||||
amd: "rocm-turboquant-development"
|
||||
vulkan: "vulkan-turboquant-development"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-turboquant-development"
|
||||
nvidia-cuda-13: "cuda13-turboquant-development"
|
||||
nvidia-cuda-12: "cuda12-turboquant-development"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-turboquant-development"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-turboquant-development"
|
||||
- !!merge <<: *stablediffusionggml
|
||||
name: "stablediffusion-ggml-development"
|
||||
capabilities:
|
||||
default: "cpu-stablediffusion-ggml-development"
|
||||
nvidia: "cuda12-stablediffusion-ggml-development"
|
||||
intel: "intel-sycl-f16-stablediffusion-ggml-development"
|
||||
# amd: "rocm-stablediffusion-ggml-development"
|
||||
vulkan: "vulkan-stablediffusion-ggml-development"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-stablediffusion-ggml-development"
|
||||
metal: "metal-stablediffusion-ggml-development"
|
||||
nvidia-cuda-13: "cuda13-stablediffusion-ggml-development"
|
||||
nvidia-cuda-12: "cuda12-stablediffusion-ggml-development"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-stablediffusion-ggml-development"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml-development"
|
||||
- !!merge <<: *neutts
|
||||
name: "cpu-neutts"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-neutts"
|
||||
|
|
@ -1252,6 +1452,108 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-llama-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-13-llama-cpp
|
||||
## ik-llama-cpp
|
||||
- !!merge <<: *ikllamacpp
|
||||
name: "cpu-ik-llama-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-ik-llama-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-ik-llama-cpp
|
||||
- !!merge <<: *ikllamacpp
|
||||
name: "cpu-ik-llama-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-ik-llama-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-ik-llama-cpp
|
||||
## turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cpu-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cpu-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cuda12-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cuda12-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-12-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cuda13-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cuda13-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-13-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "rocm-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-rocm-hipblas-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "rocm-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-rocm-hipblas-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "intel-sycl-f32-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-sycl-f32-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "intel-sycl-f32-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-sycl-f32-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "intel-sycl-f16-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-sycl-f16-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "intel-sycl-f16-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-sycl-f16-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "vulkan-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-vulkan-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "vulkan-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-vulkan-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "nvidia-l4t-arm64-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-arm64-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "nvidia-l4t-arm64-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-arm64-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cuda13-nvidia-l4t-arm64-turboquant"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-turboquant
|
||||
- !!merge <<: *turboquant
|
||||
name: "cuda13-nvidia-l4t-arm64-turboquant-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-turboquant"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-turboquant
|
||||
## whisper
|
||||
- !!merge <<: *whispercpp
|
||||
name: "nvidia-l4t-arm64-whisper"
|
||||
|
|
@ -1459,6 +1761,7 @@
|
|||
nvidia: "cuda12-vllm-development"
|
||||
amd: "rocm-vllm-development"
|
||||
intel: "intel-vllm-development"
|
||||
cpu: "cpu-vllm-development"
|
||||
- !!merge <<: *vllm
|
||||
name: "cuda12-vllm"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vllm"
|
||||
|
|
@ -1474,6 +1777,11 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-vllm"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-vllm
|
||||
- !!merge <<: *vllm
|
||||
name: "cpu-vllm"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vllm"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-vllm
|
||||
- !!merge <<: *vllm
|
||||
name: "cuda12-vllm-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vllm"
|
||||
|
|
@ -1489,6 +1797,59 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vllm"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-vllm
|
||||
- !!merge <<: *vllm
|
||||
name: "cpu-vllm-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vllm"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-vllm
|
||||
# sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "sglang-development"
|
||||
capabilities:
|
||||
nvidia: "cuda12-sglang-development"
|
||||
amd: "rocm-sglang-development"
|
||||
intel: "intel-sglang-development"
|
||||
cpu: "cpu-sglang-development"
|
||||
- !!merge <<: *sglang
|
||||
name: "cuda12-sglang"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "rocm-sglang"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-rocm-hipblas-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "intel-sglang"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "cpu-sglang"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "cuda12-sglang-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-12-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "rocm-sglang-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-rocm-hipblas-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "intel-sglang-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-sglang
|
||||
- !!merge <<: *sglang
|
||||
name: "cpu-sglang-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sglang"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-sglang
|
||||
# vllm-omni
|
||||
- !!merge <<: *vllm-omni
|
||||
name: "vllm-omni-development"
|
||||
|
|
@ -1602,6 +1963,89 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-rfdetr"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-metal-darwin-arm64-rfdetr
|
||||
## sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "sam3-cpp-development"
|
||||
capabilities:
|
||||
default: "cpu-sam3-cpp-development"
|
||||
nvidia: "cuda12-sam3-cpp-development"
|
||||
nvidia-cuda-12: "cuda12-sam3-cpp-development"
|
||||
nvidia-cuda-13: "cuda13-sam3-cpp-development"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-sam3-cpp-development"
|
||||
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-sam3-cpp-development"
|
||||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sam3-cpp-development"
|
||||
intel: "intel-sycl-f32-sam3-cpp-development"
|
||||
vulkan: "vulkan-sam3-cpp-development"
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cpu-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cpu-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cuda12-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cuda12-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-12-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cuda13-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cuda13-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-13-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "nvidia-l4t-arm64-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-arm64-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "nvidia-l4t-arm64-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-arm64-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cuda13-nvidia-l4t-arm64-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "cuda13-nvidia-l4t-arm64-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "intel-sycl-f32-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-sycl-f32-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "intel-sycl-f32-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-sycl-f32-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "vulkan-sam3-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-vulkan-sam3-cpp
|
||||
- !!merge <<: *sam3cpp
|
||||
name: "vulkan-sam3-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-sam3-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-vulkan-sam3-cpp
|
||||
## Rerankers
|
||||
- !!merge <<: *rerankers
|
||||
name: "rerankers-development"
|
||||
|
|
@ -1661,6 +2105,15 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-rerankers"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-metal-darwin-arm64-rerankers
|
||||
## tinygrad
|
||||
## Single image — the meta anchor above carries the latest uri directly
|
||||
## since there is only one variant. The development entry below points at
|
||||
## the master tag.
|
||||
- !!merge <<: *tinygrad
|
||||
name: "tinygrad-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-tinygrad"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-tinygrad
|
||||
## Transformers
|
||||
- !!merge <<: *transformers
|
||||
name: "transformers-development"
|
||||
|
|
@ -1973,6 +2426,107 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-acestep-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-13-acestep-cpp
|
||||
## qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-arm64-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "nvidia-l4t-arm64-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-arm64-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cpu-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "metal-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-metal-darwin-arm64-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "metal-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-metal-darwin-arm64-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cpu-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cuda12-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "rocm-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "intel-sycl-f32-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-sycl-f32-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "intel-sycl-f16-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-intel-sycl-f16-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "vulkan-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-vulkan-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "vulkan-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-vulkan-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cuda12-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "rocm-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-rocm-hipblas-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "intel-sycl-f32-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-sycl-f32-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "intel-sycl-f16-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-intel-sycl-f16-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cuda13-qwen3-tts-cpp"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen3-tts-cpp
|
||||
- !!merge <<: *qwen3ttscpp
|
||||
name: "cuda13-qwen3-tts-cpp-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen3-tts-cpp"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen3-tts-cpp
|
||||
## kokoro
|
||||
- !!merge <<: *kokoro
|
||||
name: "kokoro-development"
|
||||
|
|
@ -2042,15 +2596,32 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-kokoro"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-metal-darwin-arm64-kokoro
|
||||
## kokoros (Rust)
|
||||
- !!merge <<: *kokoros
|
||||
name: "kokoros-development"
|
||||
capabilities:
|
||||
default: "cpu-kokoros-development"
|
||||
- !!merge <<: *kokoros
|
||||
name: "cpu-kokoros"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-kokoros"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-kokoros
|
||||
- !!merge <<: *kokoros
|
||||
name: "cpu-kokoros-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-kokoros"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-kokoros
|
||||
## faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "faster-whisper-development"
|
||||
capabilities:
|
||||
default: "cpu-faster-whisper-development"
|
||||
nvidia: "cuda12-faster-whisper-development"
|
||||
intel: "intel-faster-whisper-development"
|
||||
amd: "rocm-faster-whisper-development"
|
||||
metal: "metal-faster-whisper-development"
|
||||
nvidia-cuda-13: "cuda13-faster-whisper-development"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-faster-whisper-development"
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "cuda12-faster-whisper-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-faster-whisper"
|
||||
|
|
@ -2091,6 +2662,36 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-metal-darwin-arm64-faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "cuda12-faster-whisper"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "rocm-faster-whisper"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-gpu-rocm-hipblas-faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "cpu-faster-whisper"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-cpu-faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "cpu-faster-whisper-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-cpu-faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "nvidia-l4t-arm64-faster-whisper"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-faster-whisper
|
||||
- !!merge <<: *faster-whisper
|
||||
name: "nvidia-l4t-arm64-faster-whisper-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-faster-whisper"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-faster-whisper
|
||||
## moonshine
|
||||
- !!merge <<: *moonshine
|
||||
name: "moonshine-development"
|
||||
|
|
@ -2149,6 +2750,7 @@
|
|||
default: "cpu-whisperx-development"
|
||||
nvidia-cuda-13: "cuda13-whisperx-development"
|
||||
nvidia-cuda-12: "cuda12-whisperx-development"
|
||||
nvidia-l4t: "nvidia-l4t-arm64-whisperx-development"
|
||||
- !!merge <<: *whisperx
|
||||
name: "cpu-whisperx"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisperx"
|
||||
|
|
@ -2199,6 +2801,16 @@
|
|||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-whisperx"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-metal-darwin-arm64-whisperx
|
||||
- !!merge <<: *whisperx
|
||||
name: "nvidia-l4t-arm64-whisperx"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-whisperx"
|
||||
mirrors:
|
||||
- localai/localai-backends:latest-nvidia-l4t-whisperx
|
||||
- !!merge <<: *whisperx
|
||||
name: "nvidia-l4t-arm64-whisperx-development"
|
||||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-whisperx"
|
||||
mirrors:
|
||||
- localai/localai-backends:master-nvidia-l4t-whisperx
|
||||
## coqui
|
||||
|
||||
- !!merge <<: *coqui
@@ -1,5 +1,5 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
-torch==2.8.0+rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
+torch==2.10.0+rocm7.0
 torchaudio
 torchvision
@@ -1,6 +1,6 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
-torch==2.9.1+rocm6.4
-torchaudio==2.9.1+rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
+torch==2.10.0+rocm7.0
+torchaudio==2.10.0+rocm7.0
 transformers
 numpy>=1.24.0,<1.26.0
 # https://github.com/mudler/LocalAI/pull/6240#issuecomment-3329518289
@@ -344,7 +344,16 @@ function ensureVenv() {
 
     if [ ! -d "${EDIR}/venv" ]; then
         if [ "x${USE_PIP}" == "xtrue" ]; then
-            "${interpreter}" -m venv --copies "${EDIR}/venv"
+            # --copies is only needed when we will later relocate the venv via
+            # _makeVenvPortable (PORTABLE_PYTHON=true). Some Python builds —
+            # notably macOS system Python — refuse to create a venv with
+            # --copies because the build doesn't support it. Fall back to
+            # symlinks in that case.
+            local venv_args=""
+            if [ "x${PORTABLE_PYTHON}" == "xtrue" ]; then
+                venv_args="--copies"
+            fi
+            "${interpreter}" -m venv ${venv_args} "${EDIR}/venv"
             source "${EDIR}/venv/bin/activate"
             "${interpreter}" -m pip install --upgrade pip
         else
backend/python/common/mlx_utils.py (new file, 100 lines):

"""Shared utilities for the mlx and mlx-vlm gRPC backends.

These helpers wrap mlx-lm's and mlx-vlm's native tool-parser modules, which
auto-detect the right parser from the model's chat template. Each tool
module exposes ``tool_call_start``, ``tool_call_end`` and
``parse_tool_call(text, tools) -> dict | list[dict]``.

The split-reasoning helper is generic enough to work with any think-start /
think-end delimiter pair.
"""
import json
import re
import sys
import uuid


def split_reasoning(text, think_start, think_end):
    """Split ``<think>...</think>`` blocks out of ``text``.

    Returns ``(reasoning_content, remaining_text)``. When ``think_start`` is
    empty or not found, returns ``("", text)`` unchanged.
    """
    if not think_start or not text or think_start not in text:
        return "", text
    pattern = re.compile(
        re.escape(think_start) + r"(.*?)" + re.escape(think_end or ""),
        re.DOTALL,
    )
    reasoning_parts = pattern.findall(text)
    if not reasoning_parts:
        return "", text
    remaining = pattern.sub("", text).strip()
    return "\n".join(p.strip() for p in reasoning_parts), remaining


def parse_tool_calls(text, tool_module, tools):
    """Extract tool calls from ``text`` using a mlx-lm tool module.

    Ports the ``process_tool_calls`` logic from
    ``mlx_vlm/server.py`` (v0.10 onwards). ``tool_module`` must expose
    ``tool_call_start``, ``tool_call_end`` and ``parse_tool_call``.

    Returns ``(calls, remaining_text)`` where ``calls`` is a list of dicts:

        [{"index": int, "id": str, "name": str, "arguments": str (JSON)}]

    and ``remaining_text`` is the free-form text with the tool call blocks
    removed. ``(calls, text)`` is returned unchanged if ``tool_module`` is
    ``None`` or the start delimiter isn't present.
    """
    if tool_module is None or not text:
        return [], text
    start = getattr(tool_module, "tool_call_start", None)
    end = getattr(tool_module, "tool_call_end", None)
    parse_fn = getattr(tool_module, "parse_tool_call", None)
    if not start or parse_fn is None or start not in text:
        return [], text

    if end == "" or end is None:
        pattern = re.compile(
            re.escape(start) + r".*?(?:\n|$)",
            re.DOTALL,
        )
    else:
        pattern = re.compile(
            re.escape(start) + r".*?" + re.escape(end),
            re.DOTALL,
        )

    matches = pattern.findall(text)
    if not matches:
        return [], text

    remaining = pattern.sub(" ", text).strip()
    calls = []
    for match in matches:
        call_body = match.strip().removeprefix(start)
        if end:
            call_body = call_body.removesuffix(end)
        call_body = call_body.strip()
        try:
            parsed = parse_fn(call_body, tools)
        except Exception as e:
            print(
                f"[mlx_utils] Invalid tool call: {call_body!r} ({e})",
                file=sys.stderr,
            )
            continue
        if not isinstance(parsed, list):
            parsed = [parsed]
        for tc in parsed:
            calls.append(
                {
                    "index": len(calls),
                    "id": str(uuid.uuid4()),
                    "name": (tc.get("name") or "").strip(),
                    "arguments": json.dumps(tc.get("arguments", {}), ensure_ascii=False),
                }
            )
    return calls, remaining
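
A short usage sketch of the two helpers above (the raw text, delimiters and tool module are illustrative stand-ins, not real model output); the backends apply them in this same order, reasoning first, then tool calls:

# Hedged usage sketch for mlx_utils: the sample text and the <think>/<tool_call>
# delimiters below are made-up examples, not output from a real model.
import json
import types

from mlx_utils import parse_tool_calls, split_reasoning

raw = (
    "<think>the user wants the weather, call the tool</think>"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
)

reasoning, rest = split_reasoning(raw, "<think>", "</think>")
# reasoning == "the user wants the weather, call the tool"

# Any object exposing the three attributes works as a tool module; the real
# backends pass mlx-lm / mlx-vlm parser modules (or a SimpleNamespace shim).
tool_module = types.SimpleNamespace(
    tool_call_start="<tool_call>",
    tool_call_end="</tool_call>",
    parse_tool_call=lambda body, tools: json.loads(body),
)
calls, text = parse_tool_calls(rest, tool_module, tools=None)
# calls[0]["name"] == "get_weather", calls[0]["arguments"] == '{"city": "Paris"}'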
backend/python/common/python_utils.py (new file, 65 lines):

"""Generic utilities shared across Python gRPC backends.

These helpers don't depend on any specific inference framework and can be
imported by any backend that needs to parse LocalAI gRPC options or build a
chat-template-compatible message list from proto Message objects.
"""
import json


def parse_options(options_list):
    """Parse Options[] list of ``key:value`` strings into a dict.

    Supports type inference for common cases (bool, int, float). Unknown or
    mixed-case values are returned as strings.

    Used by LoadModel to extract backend-specific options passed via
    ``ModelOptions.Options`` in ``backend.proto``.
    """
    opts = {}
    for opt in options_list:
        if ":" not in opt:
            continue
        key, value = opt.split(":", 1)
        key = key.strip()
        value = value.strip()
        # Try type conversion
        if value.lower() in ("true", "false"):
            opts[key] = value.lower() == "true"
        else:
            try:
                opts[key] = int(value)
            except ValueError:
                try:
                    opts[key] = float(value)
                except ValueError:
                    opts[key] = value
    return opts


def messages_to_dicts(proto_messages):
    """Convert proto ``Message`` objects to dicts suitable for ``apply_chat_template``.

    Handles: ``role``, ``content``, ``name``, ``tool_call_id``,
    ``reasoning_content``, ``tool_calls`` (JSON string → Python list).

    HuggingFace chat templates (and their MLX/vLLM wrappers) expect a list of
    plain dicts — proto Message objects don't work directly with Jinja, so
    this conversion is needed before every ``apply_chat_template`` call.
    """
    result = []
    for msg in proto_messages:
        d = {"role": msg.role, "content": msg.content or ""}
        if msg.name:
            d["name"] = msg.name
        if msg.tool_call_id:
            d["tool_call_id"] = msg.tool_call_id
        if msg.reasoning_content:
            d["reasoning_content"] = msg.reasoning_content
        if msg.tool_calls:
            try:
                d["tool_calls"] = json.loads(msg.tool_calls)
            except json.JSONDecodeError:
                pass
        result.append(d)
    return result
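
A short usage sketch of both helpers (the option keys and the SimpleNamespace message are illustrative stand-ins for real gRPC data):

# Hedged usage sketch for python_utils; SimpleNamespace stands in for a
# backend_pb2.Message since the helper only reads attributes.
from types import SimpleNamespace

from python_utils import messages_to_dicts, parse_options

opts = parse_options(["temperature:0.7", "top_k:40", "trust_remote_code:true", "draft_model:qwen"])
# -> {"temperature": 0.7, "top_k": 40, "trust_remote_code": True, "draft_model": "qwen"}

msg = SimpleNamespace(role="user", content="hi", name="", tool_call_id="",
                      reasoning_content="", tool_calls="")
messages_to_dicts([msg])
# -> [{"role": "user", "content": "hi"}]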
@@ -1,2 +1,2 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
 torch
backend/python/common/vllm_utils.py (new file, 43 lines):

"""vLLM-specific helpers for the vllm and vllm-omni gRPC backends.

Generic helpers (``parse_options``, ``messages_to_dicts``) live in
``python_utils`` and are re-exported here for backwards compatibility with
existing imports in both backends.
"""
import sys

from python_utils import messages_to_dicts, parse_options

__all__ = ["parse_options", "messages_to_dicts", "setup_parsers"]


def setup_parsers(opts):
    """Return ``(tool_parser_cls, reasoning_parser_cls)`` from an opts dict.

    Uses vLLM's native ``ToolParserManager`` / ``ReasoningParserManager``.
    Returns ``(None, None)`` if vLLM isn't installed or the requested
    parser name can't be resolved.
    """
    tool_parser_cls = None
    reasoning_parser_cls = None

    tool_parser_name = opts.get("tool_parser")
    reasoning_parser_name = opts.get("reasoning_parser")

    if tool_parser_name:
        try:
            from vllm.tool_parsers import ToolParserManager
            tool_parser_cls = ToolParserManager.get_tool_parser(tool_parser_name)
            print(f"[vllm_utils] Loaded tool_parser: {tool_parser_name}", file=sys.stderr)
        except Exception as e:
            print(f"[vllm_utils] Failed to load tool_parser {tool_parser_name}: {e}", file=sys.stderr)

    if reasoning_parser_name:
        try:
            from vllm.reasoning import ReasoningParserManager
            reasoning_parser_cls = ReasoningParserManager.get_reasoning_parser(reasoning_parser_name)
            print(f"[vllm_utils] Loaded reasoning_parser: {reasoning_parser_name}", file=sys.stderr)
        except Exception as e:
            print(f"[vllm_utils] Failed to load reasoning_parser {reasoning_parser_name}: {e}", file=sys.stderr)

    return tool_parser_cls, reasoning_parser_cls
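
A short usage sketch wiring the two helpers together (the parser names are illustrative; which names actually resolve depends on the installed vLLM version):

# Hedged usage sketch for vllm_utils; "hermes" and "deepseek_r1" are
# illustrative parser names, and setup_parsers returns (None, None) when
# vLLM is missing or a name cannot be resolved.
from vllm_utils import parse_options, setup_parsers

opts = parse_options(["tool_parser:hermes", "reasoning_parser:deepseek_r1"])
tool_parser_cls, reasoning_parser_cls = setup_parsers(opts)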
@@ -1,6 +1,6 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
-torch==2.8.0+rocm6.4
-torchaudio==2.8.0+rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
+torch==2.10.0+rocm7.0
+torchaudio==2.10.0+rocm7.0
 transformers==4.48.3
 accelerate
 coqui-tts
@@ -1,6 +1,6 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
-torch==2.8.0+rocm6.4
-torchvision==0.23.0+rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
+torch==2.10.0+rocm7.0
+torchvision==0.25.0+rocm7.0
 git+https://github.com/huggingface/diffusers
 opencv-python
 transformers
@@ -16,4 +16,14 @@ if [ "x${BUILD_PROFILE}" == "xintel" ]; then
     EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
 fi
 
+if [ "x${BUILD_PROFILE}" == "xl4t13" ]; then
+    PYTHON_VERSION="3.12"
+    PYTHON_PATCH="12"
+    PY_STANDALONE_TAG="20251120"
+fi
+
+if [ "x${BUILD_PROFILE}" == "xl4t12" ]; then
+    USE_PIP=true
+fi
+
 installRequirements
@@ -1,3 +1,3 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
 torch
 faster-whisper
backend/python/faster-whisper/requirements-l4t12.txt (new file, 3 lines):

--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu129/
torch
faster-whisper

backend/python/faster-whisper/requirements-l4t13.txt (new file, 3 lines):

--extra-index-url https://download.pytorch.org/whl/cu130
torch
faster-whisper
@@ -1,3 +1,3 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.3
-torch==2.7.1+rocm6.3
-torchaudio==2.7.1+rocm6.3
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
+torch==2.10.0+rocm7.0
+torchaudio==2.10.0+rocm7.0
@@ -1,6 +1,6 @@
---extra-index-url https://download.pytorch.org/whl/rocm6.4
-torch==2.8.0+rocm6.4
-torchaudio==2.8.0+rocm6.4
+--extra-index-url https://download.pytorch.org/whl/rocm7.0
+torch==2.10.0+rocm7.0
+torchaudio==2.10.0+rocm7.0
 transformers
 accelerate
 kokoro
@ -15,17 +15,21 @@ Two startup modes:
|
|||
import asyncio
|
||||
from concurrent import futures
|
||||
import argparse
|
||||
import gc
|
||||
import json
|
||||
import os
|
||||
import signal
|
||||
import sys
|
||||
import tempfile
|
||||
import types
|
||||
from typing import List
|
||||
|
||||
import grpc
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'common'))
|
||||
from grpc_auth import get_auth_interceptors
|
||||
from python_utils import messages_to_dicts, parse_options as _shared_parse_options
|
||||
from mlx_utils import parse_tool_calls, split_reasoning
|
||||
|
||||
|
||||
import backend_pb2
|
||||
|
|
@ -62,37 +66,10 @@ def mlx_distributed_init(rank, hostfile, backend="ring", coordinator=None):
|
|||
raise ValueError(f"Unknown backend: {backend}")
|
||||
|
||||
|
||||
def is_float(s):
|
||||
try:
|
||||
float(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
|
||||
def is_int(s):
|
||||
try:
|
||||
int(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
|
||||
def parse_options(options):
|
||||
"""Parse key:value option strings into a dict."""
|
||||
result = {}
|
||||
for opt in options:
|
||||
if ":" not in opt:
|
||||
continue
|
||||
key, value = opt.split(":", 1)
|
||||
if is_float(value):
|
||||
value = float(value)
|
||||
elif is_int(value):
|
||||
value = int(value)
|
||||
elif value.lower() in ["true", "false"]:
|
||||
value = value.lower() == "true"
|
||||
result[key] = value
|
||||
return result
|
||||
# Re-export the shared helper under the local name for back-compat with
|
||||
# any callers (and the existing distributed worker tests) that imported
|
||||
# parse_options directly from this module.
|
||||
parse_options = _shared_parse_options
|
||||
|
||||
|
||||
class BackendServicer(backend_pb2_grpc.BackendServicer):
|
||||
|
|
@ -188,6 +165,20 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
)
|
||||
print("[Rank 0] Model loaded (single-node with prompt cache)", file=sys.stderr)
|
||||
|
||||
# Log auto-detected TokenizerWrapper capabilities. Same shape
|
||||
# as the mlx backend: has_tool_calling / has_thinking from
|
||||
# mlx_lm.tokenizer_utils + the start/end markers it sniffed
|
||||
# from the chat template / vocab.
|
||||
has_tools = bool(getattr(self.tokenizer, "has_tool_calling", False))
|
||||
has_thinking = bool(getattr(self.tokenizer, "has_thinking", False))
|
||||
tcs = getattr(self.tokenizer, "tool_call_start", None)
|
||||
tce = getattr(self.tokenizer, "tool_call_end", None)
|
||||
print(
|
||||
f"[Rank 0] Tokenizer capabilities: has_tool_calling={has_tools} "
|
||||
f"has_thinking={has_thinking} tool_call_start={tcs!r} tool_call_end={tce!r}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
except Exception as err:
|
||||
print(f"[Rank 0] Error loading model: {err}", file=sys.stderr)
|
||||
return backend_pb2.Result(success=False, message=f"Error loading model: {err}")
|
||||
|
|
@ -201,7 +192,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
try:
|
||||
import mlx.core as mx
|
||||
from mlx_lm import stream_generate
|
||||
from mlx_lm.sample_utils import make_sampler
|
||||
from mlx_lm.sample_utils import make_logits_processors, make_sampler
|
||||
|
||||
prompt_text = self._prepare_prompt(request)
|
||||
tokens = self._get_tokens_from_prompt(prompt_text)
|
||||
|
|
@ -211,7 +202,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
self.coordinator.broadcast_command(CMD_GENERATE, len(tokens))
|
||||
self.coordinator.broadcast_tokens(tokens)
|
||||
|
||||
max_tokens, sampler_params = self._build_generation_params(request)
|
||||
max_tokens, sampler_params, logits_params, stop_words = self._build_generation_params(request)
|
||||
|
||||
if self.coordinator:
|
||||
gen_params = self.coordinator.broadcast_generation_params(
|
||||
|
|
@ -222,6 +213,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
max_tokens = gen_params["max_tokens"]
|
||||
|
||||
sampler = make_sampler(**sampler_params)
|
||||
logits_processors = make_logits_processors(**logits_params) if logits_params else None
|
||||
|
||||
# Use prompt cache in single-node mode
|
||||
gen_kwargs = {}
|
||||
|
|
@ -238,22 +230,44 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
tokens = remaining_tokens if remaining_tokens else cache_key
|
||||
|
||||
generated = []
|
||||
last_response = None
|
||||
for response in stream_generate(
|
||||
self.model,
|
||||
self.tokenizer,
|
||||
prompt=tokens,
|
||||
max_tokens=max_tokens,
|
||||
sampler=sampler,
|
||||
logits_processors=logits_processors,
|
||||
**gen_kwargs,
|
||||
):
|
||||
generated.append(response.text)
|
||||
last_response = response
|
||||
if cache_key is not None:
|
||||
cache_key.append(response.token)
|
||||
if stop_words and any(s in "".join(generated) for s in stop_words):
|
||||
break
|
||||
|
||||
if self.lru_cache is not None and cache_key is not None:
|
||||
self.lru_cache.insert_cache(self.model_key, cache_key, prompt_cache)
|
||||
|
||||
return backend_pb2.Reply(message=bytes(''.join(generated), encoding='utf-8'))
|
||||
full_text = self._truncate_at_stop("".join(generated), stop_words)
|
||||
content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes = (
|
||||
self._finalize_output(request, full_text, last_response)
|
||||
)
|
||||
|
||||
return backend_pb2.Reply(
|
||||
message=bytes(content, encoding='utf-8'),
|
||||
prompt_tokens=prompt_tokens,
|
||||
tokens=completion_tokens,
|
||||
logprobs=logprobs_bytes,
|
||||
chat_deltas=[
|
||||
backend_pb2.ChatDelta(
|
||||
content=content,
|
||||
reasoning_content=reasoning_content,
|
||||
tool_calls=tool_calls_proto,
|
||||
)
|
||||
],
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"[Rank 0] Error in Predict: {e}", file=sys.stderr)
|
||||
|
|
@ -268,7 +282,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
try:
|
||||
import mlx.core as mx
|
||||
from mlx_lm import stream_generate
|
||||
from mlx_lm.sample_utils import make_sampler
|
||||
from mlx_lm.sample_utils import make_logits_processors, make_sampler
|
||||
|
||||
prompt_text = self._prepare_prompt(request)
|
||||
tokens = self._get_tokens_from_prompt(prompt_text)
|
||||
|
|
@ -278,7 +292,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
self.coordinator.broadcast_command(CMD_GENERATE, len(tokens))
|
||||
self.coordinator.broadcast_tokens(tokens)
|
||||
|
||||
max_tokens, sampler_params = self._build_generation_params(request, default_max_tokens=512)
|
||||
max_tokens, sampler_params, logits_params, stop_words = self._build_generation_params(
|
||||
request, default_max_tokens=512
|
||||
)
|
||||
|
||||
if self.coordinator:
|
||||
gen_params = self.coordinator.broadcast_generation_params(
|
||||
|
|
@ -289,6 +305,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
max_tokens = gen_params["max_tokens"]
|
||||
|
||||
sampler = make_sampler(**sampler_params)
|
||||
logits_processors = make_logits_processors(**logits_params) if logits_params else None
|
||||
|
||||
# Use prompt cache in single-node mode
|
||||
gen_kwargs = {}
|
||||
|
|
@ -304,17 +321,45 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
gen_kwargs['prompt_cache'] = prompt_cache
|
||||
tokens = remaining_tokens if remaining_tokens else cache_key
|
||||
|
||||
accumulated = []
|
||||
last_response = None
|
||||
for response in stream_generate(
|
||||
self.model,
|
||||
self.tokenizer,
|
||||
prompt=tokens,
|
||||
max_tokens=max_tokens,
|
||||
sampler=sampler,
|
||||
logits_processors=logits_processors,
|
||||
**gen_kwargs,
|
||||
):
|
||||
if cache_key is not None:
|
||||
cache_key.append(response.token)
|
||||
yield backend_pb2.Reply(message=bytes(response.text, encoding='utf-8'))
|
||||
accumulated.append(response.text)
|
||||
last_response = response
|
||||
yield backend_pb2.Reply(
|
||||
message=bytes(response.text, encoding='utf-8'),
|
||||
chat_deltas=[backend_pb2.ChatDelta(content=response.text)],
|
||||
)
|
||||
if stop_words and any(s in "".join(accumulated) for s in stop_words):
|
||||
break
|
||||
|
||||
full_text = self._truncate_at_stop("".join(accumulated), stop_words)
|
||||
content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes = (
|
||||
self._finalize_output(request, full_text, last_response)
|
||||
)
|
||||
yield backend_pb2.Reply(
|
||||
message=b"",
|
||||
prompt_tokens=prompt_tokens,
|
||||
tokens=completion_tokens,
|
||||
logprobs=logprobs_bytes,
|
||||
chat_deltas=[
|
||||
backend_pb2.ChatDelta(
|
||||
content="",
|
||||
reasoning_content=reasoning_content,
|
||||
tool_calls=tool_calls_proto,
|
||||
)
|
||||
],
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"[Rank 0] Error in PredictStream: {e}", file=sys.stderr)
|
||||
|
|
@ -335,12 +380,74 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
context.set_details("Embeddings are not supported in the MLX distributed backend.")
|
||||
return backend_pb2.EmbeddingResult()
|
||||
|
||||
async def TokenizeString(self, request, context):
|
||||
if not hasattr(self, "tokenizer") or self.tokenizer is None:
|
||||
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
|
||||
context.set_details("tokenizer not loaded")
|
||||
return backend_pb2.TokenizationResponse()
|
||||
try:
|
||||
tokens = self.tokenizer.encode(request.Prompt)
|
||||
if hasattr(tokens, "tolist"):
|
||||
tokens = tokens.tolist()
|
||||
tokens = list(tokens)
|
||||
return backend_pb2.TokenizationResponse(length=len(tokens), tokens=tokens)
|
||||
except Exception as e:
|
||||
context.set_code(grpc.StatusCode.INTERNAL)
|
||||
context.set_details(str(e))
|
||||
return backend_pb2.TokenizationResponse()
|
||||
|
||||
async def Free(self, request, context):
|
||||
try:
|
||||
# If we're rank 0 of a distributed run, tell workers to shut
|
||||
# down their per-request loops first so they release the model.
|
||||
if self.coordinator is not None:
|
||||
try:
|
||||
from coordinator import CMD_SHUTDOWN
|
||||
self.coordinator.broadcast_command(CMD_SHUTDOWN)
|
||||
except Exception as e:
|
||||
print(f"[Rank 0] failed to broadcast shutdown: {e}", file=sys.stderr)
|
||||
if hasattr(self, "model"):
|
||||
del self.model
|
||||
if hasattr(self, "tokenizer"):
|
||||
del self.tokenizer
|
||||
if self.lru_cache is not None:
|
||||
try:
|
||||
self.lru_cache.clear()
|
||||
except Exception:
|
||||
pass
|
||||
self.lru_cache = None
|
||||
self.coordinator = None
|
||||
self.group = None
|
||||
gc.collect()
|
||||
try:
|
||||
import mlx.core as mx # type: ignore
|
||||
if hasattr(mx, "clear_cache"):
|
||||
mx.clear_cache()
|
||||
elif hasattr(mx, "metal") and hasattr(mx.metal, "clear_cache"):
|
||||
mx.metal.clear_cache()
|
||||
except Exception:
|
||||
pass
|
||||
return backend_pb2.Result(success=True, message="MLX distributed model freed")
|
||||
except Exception as e:
|
||||
return backend_pb2.Result(success=False, message=str(e))
|
||||
|
||||
def _prepare_prompt(self, request):
|
||||
if not request.Prompt and request.UseTokenizerTemplate and request.Messages:
|
||||
messages = [{"role": msg.role, "content": msg.content} for msg in request.Messages]
|
||||
return self.tokenizer.apply_chat_template(
|
||||
messages, tokenize=False, add_generation_prompt=True
|
||||
)
|
||||
messages = messages_to_dicts(request.Messages)
|
||||
kwargs = {"tokenize": False, "add_generation_prompt": True}
|
||||
if request.Tools:
|
||||
try:
|
||||
kwargs["tools"] = json.loads(request.Tools)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
if request.Metadata.get("enable_thinking", "").lower() == "true":
|
||||
kwargs["enable_thinking"] = True
|
||||
try:
|
||||
return self.tokenizer.apply_chat_template(messages, **kwargs)
|
||||
except TypeError:
|
||||
return self.tokenizer.apply_chat_template(
|
||||
messages, tokenize=False, add_generation_prompt=True
|
||||
)
|
||||
return request.Prompt
|
||||
|
||||
def _get_tokens_from_prompt(self, prompt_text: str) -> List[int]:
|
||||
|
|
@ -349,6 +456,82 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
return tokens.tolist()
|
||||
return list(tokens)
|
||||
|
||||
def _tool_module_from_tokenizer(self):
|
||||
"""Same shim as the mlx backend: fall back to json.loads when the
|
||||
installed mlx-lm doesn't expose a tool_parser callable on the
|
||||
wrapper (true on 0.29.x — only HEAD ships parsers)."""
|
||||
start = getattr(self.tokenizer, "tool_call_start", None)
|
||||
end = getattr(self.tokenizer, "tool_call_end", None)
|
||||
if not start:
|
||||
return None
|
||||
parse_fn = getattr(self.tokenizer, "tool_parser", None)
|
||||
if parse_fn is None:
|
||||
def parse_fn(body, tools): # noqa: E306
|
||||
return json.loads(body.strip())
|
||||
return types.SimpleNamespace(
|
||||
tool_call_start=start,
|
||||
tool_call_end=end or "",
|
||||
parse_tool_call=parse_fn,
|
||||
)
|
||||
|
||||
def _truncate_at_stop(self, text, stop_words):
|
||||
if not stop_words:
|
||||
return text
|
||||
earliest = len(text)
|
||||
for stop in stop_words:
|
||||
if not stop:
|
||||
continue
|
||||
idx = text.find(stop)
|
||||
if idx >= 0 and idx < earliest:
|
||||
earliest = idx
|
||||
return text[:earliest] if earliest < len(text) else text
|
||||
|
||||
def _finalize_output(self, request, generated_text, last_response):
|
||||
content = generated_text
|
||||
reasoning_content = ""
|
||||
if getattr(self.tokenizer, "has_thinking", False):
|
||||
think_start = getattr(self.tokenizer, "think_start", "") or ""
|
||||
think_end = getattr(self.tokenizer, "think_end", "") or ""
|
||||
reasoning_content, content = split_reasoning(content, think_start, think_end)
|
||||
|
||||
tool_calls_proto: List[backend_pb2.ToolCallDelta] = []
|
||||
tool_module = None
|
||||
if getattr(self.tokenizer, "has_tool_calling", False):
|
||||
tool_module = self._tool_module_from_tokenizer()
|
||||
if tool_module is not None:
|
||||
parsed_tools = None
|
||||
if request.Tools:
|
||||
try:
|
||||
parsed_tools = json.loads(request.Tools)
|
||||
except json.JSONDecodeError:
|
||||
parsed_tools = None
|
||||
calls, content = parse_tool_calls(content, tool_module, parsed_tools)
|
||||
for c in calls:
|
||||
tool_calls_proto.append(
|
||||
backend_pb2.ToolCallDelta(
|
||||
index=c["index"], id=c["id"], name=c["name"], arguments=c["arguments"],
|
||||
)
|
||||
)
|
||||
|
||||
prompt_token_count = int(getattr(last_response, "prompt_tokens", 0) or 0) if last_response else 0
|
||||
completion_token_count = int(getattr(last_response, "generation_tokens", 0) or 0) if last_response else 0
|
||||
|
||||
logprobs_bytes = b""
|
||||
if last_response is not None and int(getattr(request, "Logprobs", 0) or 0) > 0:
|
||||
try:
|
||||
lp = getattr(last_response, "logprobs", None)
|
||||
if lp is not None:
|
||||
token_id = int(getattr(last_response, "token", 0) or 0)
|
||||
token_text = self.tokenizer.decode([token_id]) if token_id else ""
|
||||
top_logprob = float(lp[token_id]) if hasattr(lp, "__getitem__") else 0.0
|
||||
logprobs_bytes = json.dumps(
|
||||
{"content": [{"token": token_text, "logprob": top_logprob}]}
|
||||
).encode("utf-8")
|
||||
except Exception as e:
|
||||
print(f"[Rank 0] Logprobs extraction failed: {e}", file=sys.stderr)
|
||||
|
||||
return content, reasoning_content, tool_calls_proto, prompt_token_count, completion_token_count, logprobs_bytes
|
||||
|
||||
def _build_generation_params(self, request, default_max_tokens=200):
|
||||
import mlx.core as mx
|
||||
|
||||
|
|
@ -373,6 +556,22 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
'xtc_probability': 0.0,
|
||||
}
|
||||
|
||||
# Logits processor parameters — pulled from the request and
|
||||
# forwarded to make_logits_processors. Rank 0 is the only rank
|
||||
# running the sampler so we don't need to broadcast these to
|
||||
# workers (workers participate in the pipeline-parallel forward
|
||||
# pass only).
|
||||
logits_params = {}
|
||||
repetition_penalty = getattr(request, 'RepetitionPenalty', 0.0) or 0.0
|
||||
if repetition_penalty and repetition_penalty != 1.0:
|
||||
logits_params['repetition_penalty'] = repetition_penalty
|
||||
presence_penalty = getattr(request, 'PresencePenalty', 0.0) or 0.0
|
||||
if presence_penalty:
|
||||
logits_params['presence_penalty'] = presence_penalty
|
||||
frequency_penalty = getattr(request, 'FrequencyPenalty', 0.0) or 0.0
|
||||
if frequency_penalty:
|
||||
logits_params['frequency_penalty'] = frequency_penalty
|
||||
|
||||
seed = getattr(request, 'Seed', 0)
|
||||
if seed != 0:
|
||||
mx.random.seed(seed)
|
||||
|
|
@ -392,9 +591,15 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
for opt_key, param_key in option_mapping.items():
|
||||
if opt_key in self.options:
|
||||
sampler_params[param_key] = self.options[opt_key]
|
||||
for opt_key in ('repetition_penalty', 'presence_penalty', 'frequency_penalty'):
|
||||
if opt_key in self.options:
|
||||
logits_params[opt_key] = self.options[opt_key]
|
||||
if 'seed' in self.options:
|
||||
mx.random.seed(self.options['seed'])
|
||||
|
||||
stop_words = list(getattr(request, 'StopPrompts', []) or [])
|
||||
return max_tokens, sampler_params, logits_params, stop_words
|
||||
|
||||
# XTC special tokens
|
||||
xtc_special_tokens = []
|
||||
if hasattr(self.tokenizer, 'eos_token_ids') and self.tokenizer.eos_token_ids:
|
||||
|
|
|
|||
|
|
@ -1,3 +1,6 @@
|
|||
import os
|
||||
import sys
|
||||
import types
|
||||
import unittest
|
||||
import subprocess
|
||||
import time
|
||||
|
|
@ -6,6 +9,12 @@ import grpc
|
|||
import backend_pb2
|
||||
import backend_pb2_grpc
|
||||
|
||||
# Make the shared helpers importable so we can unit-test them without a
|
||||
# running gRPC server.
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
|
||||
from python_utils import messages_to_dicts, parse_options
|
||||
from mlx_utils import parse_tool_calls, split_reasoning
|
||||
|
||||
|
||||
class TestBackendServicer(unittest.TestCase):
|
||||
def setUp(self):
|
||||
|
|
@ -85,3 +94,44 @@ class TestBackendServicer(unittest.TestCase):
|
|||
self.fail("sampling params service failed")
|
||||
finally:
|
||||
self.tearDown()
|
||||
|
||||
|
||||
class TestSharedHelpers(unittest.TestCase):
|
||||
"""Server-less unit tests for the helpers the mlx-distributed backend depends on."""
|
||||
|
||||
def test_parse_options_typed(self):
|
||||
opts = parse_options(["temperature:0.7", "max_tokens:128", "trust:true"])
|
||||
self.assertEqual(opts["temperature"], 0.7)
|
||||
self.assertEqual(opts["max_tokens"], 128)
|
||||
self.assertIs(opts["trust"], True)
|
||||
|
||||
def test_messages_to_dicts_roundtrip(self):
|
||||
msgs = [
|
||||
backend_pb2.Message(role="user", content="hi"),
|
||||
backend_pb2.Message(
|
||||
role="assistant",
|
||||
content="",
|
||||
tool_calls='[{"id":"call_1","type":"function","function":{"name":"f","arguments":"{}"}}]',
|
||||
),
|
||||
backend_pb2.Message(role="tool", content="42", tool_call_id="call_1", name="f"),
|
||||
]
|
||||
out = messages_to_dicts(msgs)
|
||||
self.assertEqual(out[0], {"role": "user", "content": "hi"})
|
||||
self.assertEqual(out[1]["tool_calls"][0]["function"]["name"], "f")
|
||||
self.assertEqual(out[2]["tool_call_id"], "call_1")
|
||||
|
||||
def test_split_reasoning(self):
|
||||
r, c = split_reasoning("<think>plan</think>final", "<think>", "</think>")
|
||||
self.assertEqual(r, "plan")
|
||||
self.assertEqual(c, "final")
|
||||
|
||||
def test_parse_tool_calls_with_shim(self):
|
||||
tm = types.SimpleNamespace(
|
||||
tool_call_start="<tool_call>",
|
||||
tool_call_end="</tool_call>",
|
||||
parse_tool_call=lambda body, tools: {"name": "get_weather", "arguments": {"location": body.strip()}},
|
||||
)
|
||||
calls, remaining = parse_tool_calls("<tool_call>Paris</tool_call>", tm, tools=None)
|
||||
self.assertEqual(len(calls), 1)
|
||||
self.assertEqual(calls[0]["name"], "get_weather")
|
||||
self.assertEqual(calls[0]["arguments"], '{"location": "Paris"}')
|
||||
|
|
|
|||
|
|
@ -2,11 +2,14 @@
|
|||
import asyncio
|
||||
from concurrent import futures
|
||||
import argparse
|
||||
import gc
|
||||
import json
|
||||
import signal
|
||||
import sys
|
||||
import os
|
||||
import tempfile
|
||||
import types
|
||||
from typing import List
|
||||
import time
|
||||
|
||||
import backend_pb2
|
||||
import backend_pb2_grpc
|
||||
|
|
@ -15,30 +18,18 @@ import grpc
|
|||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'common'))
|
||||
from grpc_auth import get_auth_interceptors
|
||||
from python_utils import messages_to_dicts, parse_options
|
||||
from mlx_utils import parse_tool_calls, split_reasoning
|
||||
|
||||
from mlx_vlm import load, generate, stream_generate
|
||||
from mlx_vlm import load, stream_generate
|
||||
from mlx_vlm.prompt_utils import apply_chat_template
|
||||
from mlx_vlm.utils import load_config, load_image
|
||||
from mlx_vlm.tool_parsers import _infer_tool_parser, load_tool_module
|
||||
from mlx_vlm.utils import load_config
|
||||
from mlx_lm.sample_utils import make_logits_processors, make_sampler
|
||||
import mlx.core as mx
|
||||
import base64
|
||||
import io
|
||||
from PIL import Image
|
||||
import tempfile
|
||||
|
||||
def is_float(s):
|
||||
"""Check if a string can be converted to float."""
|
||||
try:
|
||||
float(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
def is_int(s):
|
||||
"""Check if a string can be converted to int."""
|
||||
try:
|
||||
int(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
|
||||
|
||||
|
|
@ -78,36 +69,52 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
try:
|
||||
print(f"Loading MLX-VLM model: {request.Model}", file=sys.stderr)
|
||||
print(f"Request: {request}", file=sys.stderr)
|
||||
|
||||
# Parse options like in the diffusers backend
|
||||
options = request.Options
|
||||
self.options = {}
|
||||
|
||||
# The options are a list of strings in this form optname:optvalue
|
||||
# We store all the options in a dict for later use
|
||||
for opt in options:
|
||||
if ":" not in opt:
|
||||
continue
|
||||
key, value = opt.split(":", 1) # Split only on first colon to handle values with colons
|
||||
|
||||
if is_float(value):
|
||||
value = float(value)
|
||||
elif is_int(value):
|
||||
value = int(value)
|
||||
elif value.lower() in ["true", "false"]:
|
||||
value = value.lower() == "true"
|
||||
|
||||
self.options[key] = value
|
||||
|
||||
|
||||
# Parse Options[] key:value strings into a typed dict
|
||||
self.options = parse_options(request.Options)
|
||||
print(f"Options: {self.options}", file=sys.stderr)
|
||||
|
||||
|
||||
# Load model and processor using MLX-VLM
|
||||
# mlx-vlm load function returns (model, processor) instead of (model, tokenizer)
|
||||
self.model, self.processor = load(request.Model)
|
||||
|
||||
|
||||
# Load model config for chat template support
|
||||
self.config = load_config(request.Model)
|
||||
|
||||
|
||||
# Auto-infer the tool parser from the chat template. mlx-vlm has
|
||||
# its own _infer_tool_parser that falls back to mlx-lm parsers.
|
||||
tokenizer = (
|
||||
self.processor.tokenizer if hasattr(self.processor, "tokenizer") else self.processor
|
||||
)
|
||||
self.tool_module = None
|
||||
if hasattr(tokenizer, "chat_template"):
|
||||
try:
|
||||
parser_type = _infer_tool_parser(tokenizer.chat_template)
|
||||
if parser_type is not None:
|
||||
self.tool_module = load_tool_module(parser_type)
|
||||
print(
|
||||
f"[mlx-vlm] auto-detected tool parser: {parser_type}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
else:
|
||||
print(
|
||||
"[mlx-vlm] no tool parser matched the chat template",
|
||||
file=sys.stderr,
|
||||
)
|
||||
except Exception as e:
|
||||
print(
|
||||
f"[mlx-vlm] failed to load tool parser: {e}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
# Reasoning tokens — check if the tokenizer advertises thinking
|
||||
# markers. Fall back to empty strings (split_reasoning no-ops).
|
||||
self.think_start = getattr(tokenizer, "think_start", "") or ""
|
||||
self.think_end = getattr(tokenizer, "think_end", "") or ""
|
||||
self.has_thinking = bool(
|
||||
getattr(tokenizer, "has_thinking", False) or self.think_start
|
||||
)
|
||||
|
||||
except Exception as err:
|
||||
print(f"Error loading MLX-VLM model {err=}, {type(err)=}", file=sys.stderr)
|
||||
return backend_pb2.Result(success=False, message=f"Error loading MLX-VLM model: {err}")
|
||||
|
|
@ -128,63 +135,72 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
"""
|
||||
temp_files = []
|
||||
try:
|
||||
# Process images and audios from request
|
||||
image_paths = []
|
||||
audio_paths = []
|
||||
|
||||
# Process images
|
||||
if request.Images:
|
||||
for img_data in request.Images:
|
||||
img_path = self.load_image_from_base64(img_data)
|
||||
if img_path:
|
||||
image_paths.append(img_path)
|
||||
temp_files.append(img_path)
|
||||
|
||||
# Process audios
|
||||
if request.Audios:
|
||||
for audio_data in request.Audios:
|
||||
audio_path = self.load_audio_from_base64(audio_data)
|
||||
if audio_path:
|
||||
audio_paths.append(audio_path)
|
||||
temp_files.append(audio_path)
|
||||
|
||||
# Prepare the prompt with multimodal information
|
||||
prompt = self._prepare_prompt(request, num_images=len(image_paths), num_audios=len(audio_paths))
|
||||
|
||||
# Build generation parameters using request attributes and options
|
||||
max_tokens, generation_params = self._build_generation_params(request)
|
||||
|
||||
print(f"Generating text with MLX-VLM - max_tokens: {max_tokens}, params: {generation_params}", file=sys.stderr)
|
||||
print(f"Images: {len(image_paths)}, Audios: {len(audio_paths)}", file=sys.stderr)
|
||||
|
||||
# Generate text using MLX-VLM with multimodal inputs
|
||||
response = generate(
|
||||
image_paths, audio_paths = self._collect_media(request, temp_files)
|
||||
|
||||
prompt = self._prepare_prompt(
|
||||
request,
|
||||
num_images=len(image_paths),
|
||||
num_audios=len(audio_paths),
|
||||
)
|
||||
|
||||
max_tokens, sampler_params, logits_params, stop_words = self._build_generation_params(request)
|
||||
sampler = make_sampler(**sampler_params)
|
||||
logits_processors = make_logits_processors(**logits_params) if logits_params else None
|
||||
|
||||
print(
|
||||
f"Generating text with MLX-VLM - max_tokens: {max_tokens}, "
|
||||
f"images: {len(image_paths)}, audios: {len(audio_paths)}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
accumulated = []
|
||||
last_response = None
|
||||
for response in stream_generate(
|
||||
model=self.model,
|
||||
processor=self.processor,
|
||||
prompt=prompt,
|
||||
image=image_paths if image_paths else None,
|
||||
audio=audio_paths if audio_paths else None,
|
||||
max_tokens=max_tokens,
|
||||
temperature=generation_params.get('temp', 0.6),
|
||||
top_p=generation_params.get('top_p', 1.0),
|
||||
verbose=False
|
||||
sampler=sampler,
|
||||
logits_processors=logits_processors,
|
||||
):
|
||||
accumulated.append(response.text)
|
||||
last_response = response
|
||||
if stop_words and any(s in "".join(accumulated) for s in stop_words):
|
||||
break
|
||||
|
||||
full_text = self._truncate_at_stop("".join(accumulated), stop_words)
|
||||
content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes = (
|
||||
self._finalize_output(request, full_text, last_response)
|
||||
)
|
||||
|
||||
return backend_pb2.Reply(message=bytes(response, encoding='utf-8'))
|
||||
|
||||
|
||||
return backend_pb2.Reply(
|
||||
message=bytes(content, encoding='utf-8'),
|
||||
prompt_tokens=prompt_tokens,
|
||||
tokens=completion_tokens,
|
||||
logprobs=logprobs_bytes,
|
||||
chat_deltas=[
|
||||
backend_pb2.ChatDelta(
|
||||
content=content,
|
||||
reasoning_content=reasoning_content,
|
||||
tool_calls=tool_calls_proto,
|
||||
)
|
||||
],
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error in MLX-VLM Predict: {e}", file=sys.stderr)
|
||||
context.set_code(grpc.StatusCode.INTERNAL)
|
||||
context.set_details(f"Generation failed: {str(e)}")
|
||||
return backend_pb2.Reply(message=bytes("", encoding='utf-8'))
|
||||
finally:
|
||||
# Clean up temporary files
|
||||
self.cleanup_temp_files(temp_files)
|
||||
|
||||
def Embedding(self, request, context):
|
||||
"""
|
||||
A gRPC method that calculates embeddings for a given sentence.
|
||||
|
||||
|
||||
Note: MLX-VLM doesn't support embeddings directly. This method returns an error.
|
||||
|
||||
Args:
|
||||
|
|
@ -199,6 +215,79 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
context.set_details("Embeddings are not supported in the MLX-VLM backend.")
|
||||
return backend_pb2.EmbeddingResult()
|
||||
|
||||
def _collect_media(self, request, temp_files):
|
||||
"""Decode base64 Images and Audios into temp file paths.
|
||||
|
||||
Appends every temp file to ``temp_files`` so the finally block can
|
||||
clean up even on mid-generation errors.
|
||||
"""
|
||||
image_paths = []
|
||||
audio_paths = []
|
||||
if request.Images:
|
||||
for img_data in request.Images:
|
||||
img_path = self.load_image_from_base64(img_data)
|
||||
if img_path:
|
||||
image_paths.append(img_path)
|
||||
temp_files.append(img_path)
|
||||
if request.Audios:
|
||||
for audio_data in request.Audios:
|
||||
audio_path = self.load_audio_from_base64(audio_data)
|
||||
if audio_path:
|
||||
audio_paths.append(audio_path)
|
||||
temp_files.append(audio_path)
|
||||
return image_paths, audio_paths
|
||||
|
||||
async def TokenizeString(self, request, context):
|
||||
"""Tokenize ``request.Prompt`` via the processor's tokenizer."""
|
||||
if not hasattr(self, "processor") or self.processor is None:
|
||||
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
|
||||
context.set_details("processor not loaded")
|
||||
return backend_pb2.TokenizationResponse()
|
||||
try:
|
||||
tokenizer = (
|
||||
self.processor.tokenizer
|
||||
if hasattr(self.processor, "tokenizer")
|
||||
else self.processor
|
||||
)
|
||||
tokens = tokenizer.encode(request.Prompt)
|
||||
if hasattr(tokens, "tolist"):
|
||||
tokens = tokens.tolist()
|
||||
tokens = list(tokens)
|
||||
return backend_pb2.TokenizationResponse(length=len(tokens), tokens=tokens)
|
||||
except Exception as e:
|
||||
context.set_code(grpc.StatusCode.INTERNAL)
|
||||
context.set_details(str(e))
|
||||
return backend_pb2.TokenizationResponse()
|
||||
|
||||
async def Free(self, request, context):
|
||||
"""Drop the loaded model, processor and tool module."""
|
||||
try:
|
||||
if hasattr(self, "model"):
|
||||
del self.model
|
||||
if hasattr(self, "processor"):
|
||||
del self.processor
|
||||
if hasattr(self, "config"):
|
||||
del self.config
|
||||
self.tool_module = None
|
||||
gc.collect()
|
||||
# mlx.clear_cache (mlx >= 0.30) supersedes mlx.metal.clear_cache.
|
||||
try:
|
||||
if hasattr(mx, "clear_cache"):
|
||||
mx.clear_cache()
|
||||
elif hasattr(mx, "metal") and hasattr(mx.metal, "clear_cache"):
|
||||
mx.metal.clear_cache()
|
||||
except Exception:
|
||||
pass
|
||||
try:
|
||||
import torch # type: ignore
|
||||
if torch.cuda.is_available():
|
||||
torch.cuda.empty_cache()
|
||||
except Exception:
|
||||
pass
|
||||
return backend_pb2.Result(success=True, message="MLX-VLM model freed")
|
||||
except Exception as e:
|
||||
return backend_pb2.Result(success=False, message=str(e))
|
||||
|
||||
async def PredictStream(self, request, context):
|
||||
"""
|
||||
Generates text based on the given prompt and sampling parameters, and streams the results using MLX-VLM with multimodal support.
|
||||
|
|
@ -212,36 +301,28 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
"""
|
||||
temp_files = []
|
||||
try:
|
||||
image_paths, audio_paths = self._collect_media(request, temp_files)
|
||||
|
||||
prompt = self._prepare_prompt(
|
||||
request,
|
||||
num_images=len(image_paths),
|
||||
num_audios=len(audio_paths),
|
||||
)
|
||||
|
||||
max_tokens, sampler_params, logits_params, stop_words = self._build_generation_params(
|
||||
request, default_max_tokens=512
|
||||
)
|
||||
sampler = make_sampler(**sampler_params)
|
||||
logits_processors = make_logits_processors(**logits_params) if logits_params else None
|
||||
|
||||
print(
|
||||
f"Streaming text with MLX-VLM - max_tokens: {max_tokens}, "
|
||||
f"images: {len(image_paths)}, audios: {len(audio_paths)}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
accumulated = []
|
||||
last_response = None
|
||||
for response in stream_generate(
|
||||
model=self.model,
|
||||
processor=self.processor,
|
||||
|
|
@ -249,77 +330,91 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
image=image_paths if image_paths else None,
|
||||
audio=audio_paths if audio_paths else None,
|
||||
max_tokens=max_tokens,
sampler=sampler,
|
||||
logits_processors=logits_processors,
|
||||
):
|
||||
accumulated.append(response.text)
|
||||
last_response = response
|
||||
yield backend_pb2.Reply(
|
||||
message=bytes(response.text, encoding='utf-8'),
|
||||
chat_deltas=[backend_pb2.ChatDelta(content=response.text)],
|
||||
)
|
||||
if stop_words and any(s in "".join(accumulated) for s in stop_words):
|
||||
break
|
||||
|
||||
full_text = self._truncate_at_stop("".join(accumulated), stop_words)
|
||||
content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes = (
|
||||
self._finalize_output(request, full_text, last_response)
|
||||
)
|
||||
yield backend_pb2.Reply(
|
||||
message=b"",
|
||||
prompt_tokens=prompt_tokens,
|
||||
tokens=completion_tokens,
|
||||
logprobs=logprobs_bytes,
|
||||
chat_deltas=[
|
||||
backend_pb2.ChatDelta(
|
||||
content="",
|
||||
reasoning_content=reasoning_content,
|
||||
tool_calls=tool_calls_proto,
|
||||
)
|
||||
],
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error in MLX-VLM PredictStream: {e}", file=sys.stderr)
|
||||
context.set_code(grpc.StatusCode.INTERNAL)
|
||||
context.set_details(f"Streaming generation failed: {str(e)}")
|
||||
yield backend_pb2.Reply(message=bytes("", encoding='utf-8'))
|
||||
finally:
|
||||
# Clean up temporary files
|
||||
self.cleanup_temp_files(temp_files)
|
||||
|
||||
def _build_template_kwargs(self, request, num_images, num_audios):
|
||||
"""Collect kwargs for ``apply_chat_template`` that survive model variants."""
|
||||
kwargs = {"num_images": num_images, "num_audios": num_audios}
|
||||
if request.Tools:
|
||||
try:
|
||||
kwargs["tools"] = json.loads(request.Tools)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
if request.Metadata.get("enable_thinking", "").lower() == "true":
|
||||
kwargs["enable_thinking"] = True
|
||||
return kwargs
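# Illustrative example (not part of the diff): a request carrying one image,
# OpenAI-style tools JSON and Metadata["enable_thinking"] == "true" would
# produce kwargs roughly like:
#   {"num_images": 1, "num_audios": 0,
#    "tools": [{"type": "function", "function": {"name": "get_weather", ...}}],
#    "enable_thinking": True}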
|
||||
|
||||
def _apply_template(self, request, messages, num_images, num_audios):
|
||||
kwargs = self._build_template_kwargs(request, num_images, num_audios)
|
||||
try:
|
||||
return apply_chat_template(self.processor, self.config, messages, **kwargs)
|
||||
except TypeError:
|
||||
# Fallback for older mlx-vlm versions that reject tools=/enable_thinking=
|
||||
return apply_chat_template(
|
||||
self.processor,
|
||||
self.config,
|
||||
messages,
|
||||
num_images=num_images,
|
||||
num_audios=num_audios,
|
||||
)
|
||||
|
||||
def _prepare_prompt(self, request, num_images=0, num_audios=0):
|
||||
"""
|
||||
Prepare the prompt for MLX-VLM generation, handling chat templates and
multimodal inputs. Forwards tool definitions and enable_thinking when
present on the request.
"""
|
||||
# If tokenizer template is enabled and messages are provided instead of prompt, apply the tokenizer template
if not request.Prompt and request.UseTokenizerTemplate and request.Messages:
messages = messages_to_dicts(request.Messages)
return self._apply_template(request, messages, num_images, num_audios)

if request.Prompt:
if num_images > 0 or num_audios > 0:
# Create a simple message structure for multimodal prompt
messages = [{"role": "user", "content": request.Prompt}]
return self._apply_template(request, messages, num_images, num_audios)
return request.Prompt

# Fallback to empty prompt with multimodal template if we have media
if num_images > 0 or num_audios > 0:
messages = [{"role": "user", "content": ""}]
return self._apply_template(request, messages, num_images, num_audios)
return ""
|
||||
|
||||
|
||||
|
||||
|
|
@ -327,62 +422,122 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
|
||||
def _build_generation_params(self, request, default_max_tokens=200):
|
||||
"""
|
||||
Build generation parameters from request attributes and options.

Returns:
tuple: (max_tokens, sampler_params, logits_params, stop_words)
|
||||
"""
|
||||
max_tokens = getattr(request, 'Tokens', default_max_tokens) or default_max_tokens
|
||||
|
||||
temp = getattr(request, 'Temperature', 0.0) or 0.6
|
||||
top_p = getattr(request, 'TopP', 0.0) or 1.0
|
||||
min_p = getattr(request, 'MinP', 0.0) or 0.0
|
||||
top_k = getattr(request, 'TopK', 0) or 0
|
||||
|
||||
sampler_params = {
|
||||
'temp': temp,
|
||||
'top_p': top_p,
|
||||
'min_p': min_p,
|
||||
'top_k': top_k,
|
||||
}
|
||||
|
||||
logits_params = {}
|
||||
repetition_penalty = getattr(request, 'RepetitionPenalty', 0.0) or 0.0
|
||||
if repetition_penalty and repetition_penalty != 1.0:
|
||||
logits_params['repetition_penalty'] = repetition_penalty
|
||||
presence_penalty = getattr(request, 'PresencePenalty', 0.0) or 0.0
|
||||
if presence_penalty:
|
||||
logits_params['presence_penalty'] = presence_penalty
|
||||
frequency_penalty = getattr(request, 'FrequencyPenalty', 0.0) or 0.0
|
||||
if frequency_penalty:
|
||||
logits_params['frequency_penalty'] = frequency_penalty
|
||||
|
||||
seed = getattr(request, 'Seed', 0)
|
||||
if seed != 0:
|
||||
mx.random.seed(seed)
|
||||
|
||||
# Override with options if available
|
||||
|
||||
if hasattr(self, 'options'):
|
||||
# Max tokens from options
|
||||
if 'max_tokens' in self.options:
|
||||
max_tokens = self.options['max_tokens']
|
||||
|
||||
# Generation parameters from options
|
||||
option_mapping = {
'temp': 'temp', 'temperature': 'temp',
'top_p': 'top_p', 'min_p': 'min_p', 'top_k': 'top_k',
}

for option_key, param_key in option_mapping.items():
if option_key in self.options:
sampler_params[param_key] = self.options[option_key]
for option_key in ('repetition_penalty', 'presence_penalty', 'frequency_penalty'):
if option_key in self.options:
logits_params[option_key] = self.options[option_key]
if 'seed' in self.options:
mx.random.seed(self.options['seed'])

stop_words = list(getattr(request, 'StopPrompts', []) or [])
return max_tokens, sampler_params, logits_params, stop_words
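# Illustrative example (values hypothetical): a request with Temperature=0.7,
# TopP=0.9, TopK=40, RepetitionPenalty=1.1 and StopPrompts=["</s>"] yields
#   max_tokens      -> request.Tokens, or the default when unset
#   sampler_params  -> {'temp': 0.7, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40}
#   logits_params   -> {'repetition_penalty': 1.1}
#   stop_words      -> ["</s>"]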
|
||||
|
||||
def _finalize_output(self, request, generated_text, last_response):
|
||||
"""Split reasoning + tool calls out of generated_text and return the
|
||||
tuple consumed by Reply-builders."""
|
||||
content = generated_text
|
||||
reasoning_content = ""
|
||||
|
||||
if getattr(self, "has_thinking", False):
|
||||
reasoning_content, content = split_reasoning(content, self.think_start, self.think_end)
|
||||
|
||||
tool_calls_proto: List[backend_pb2.ToolCallDelta] = []
|
||||
if self.tool_module is not None:
|
||||
parsed_tools = None
|
||||
if request.Tools:
|
||||
try:
|
||||
parsed_tools = json.loads(request.Tools)
|
||||
except json.JSONDecodeError:
|
||||
parsed_tools = None
|
||||
calls, content = parse_tool_calls(content, self.tool_module, parsed_tools)
|
||||
for c in calls:
|
||||
tool_calls_proto.append(
|
||||
backend_pb2.ToolCallDelta(
|
||||
index=c["index"],
|
||||
id=c["id"],
|
||||
name=c["name"],
|
||||
arguments=c["arguments"],
|
||||
)
|
||||
)
|
||||
|
||||
prompt_tokens = int(getattr(last_response, "prompt_tokens", 0) or 0) if last_response else 0
|
||||
completion_tokens = int(getattr(last_response, "generation_tokens", 0) or 0) if last_response else 0
|
||||
|
||||
logprobs_bytes = b""
|
||||
if last_response is not None and int(getattr(request, "Logprobs", 0) or 0) > 0:
|
||||
try:
|
||||
lp = getattr(last_response, "logprobs", None)
|
||||
if lp is not None:
|
||||
token_id = int(getattr(last_response, "token", 0) or 0)
|
||||
tokenizer = (
|
||||
self.processor.tokenizer
|
||||
if hasattr(self.processor, "tokenizer")
|
||||
else self.processor
|
||||
)
|
||||
token_text = tokenizer.decode([token_id]) if token_id else ""
|
||||
top_logprob = float(lp[token_id]) if hasattr(lp, "__getitem__") else 0.0
|
||||
logprobs_bytes = json.dumps(
|
||||
{"content": [{"token": token_text, "logprob": top_logprob}]}
|
||||
).encode("utf-8")
|
||||
except Exception as e:
|
||||
print(f"[mlx-vlm] Logprobs extraction failed: {e}", file=sys.stderr)
|
||||
|
||||
return content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes
|
||||
|
||||
def _truncate_at_stop(self, text, stop_words):
|
||||
if not stop_words:
|
||||
return text
|
||||
earliest = len(text)
|
||||
for stop in stop_words:
|
||||
if not stop:
|
||||
continue
|
||||
idx = text.find(stop)
|
||||
if idx >= 0 and idx < earliest:
|
||||
earliest = idx
|
||||
return text[:earliest] if earliest < len(text) else text
|
||||
|
||||
def load_image_from_base64(self, image_data: str):
|
||||
"""
|
||||
|
|
|
|||
|
|
@ -1,17 +1,19 @@
|
|||
import os
import sys
import types
import unittest
import subprocess
import time

import grpc
import backend_pb2
import backend_pb2_grpc
|
||||
# Make the shared helpers importable so we can unit-test them without a
|
||||
# running gRPC server.
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
|
||||
from python_utils import messages_to_dicts, parse_options
|
||||
from mlx_utils import parse_tool_calls, split_reasoning
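# The shared split_reasoning helper is imported from mlx_utils above; its
# implementation is not part of this diff. Below is a minimal sketch consistent
# with the assertions in TestSharedHelpers (illustrative only, written under
# the assumption that the thinking markers appear at most once).
def _split_reasoning_sketch(text, think_start, think_end):
    # Return (reasoning, content); if no complete think block, reasoning is "".
    start = text.find(think_start)
    end = text.find(think_end)
    if start == -1 or end == -1 or end < start:
        return "", text
    reasoning = text[start + len(think_start):end]
    content = (text[:start] + text[end + len(think_end):]).strip()
    return reasoning, content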
|
||||
|
||||
class TestBackendServicer(unittest.TestCase):
|
||||
"""
|
||||
|
|
@ -143,4 +145,55 @@ class TestBackendServicer(unittest.TestCase):
|
|||
print(err)
|
||||
self.fail("Embedding service failed")
|
||||
finally:
|
||||
self.tearDown()
|
||||
|
||||
|
||||
class TestSharedHelpers(unittest.TestCase):
|
||||
"""Server-less unit tests for the helpers the mlx-vlm backend depends on."""
|
||||
|
||||
def test_parse_options_typed(self):
|
||||
opts = parse_options(["temperature:0.7", "max_tokens:128", "trust:true", "name:hello"])
|
||||
self.assertEqual(opts["temperature"], 0.7)
|
||||
self.assertEqual(opts["max_tokens"], 128)
|
||||
self.assertIs(opts["trust"], True)
|
||||
self.assertEqual(opts["name"], "hello")
|
||||
|
||||
def test_messages_to_dicts_roundtrip(self):
|
||||
msgs = [
|
||||
backend_pb2.Message(role="user", content="hi"),
|
||||
backend_pb2.Message(
|
||||
role="assistant",
|
||||
content="",
|
||||
tool_calls='[{"id":"call_1","type":"function","function":{"name":"f","arguments":"{}"}}]',
|
||||
),
|
||||
backend_pb2.Message(
|
||||
role="tool",
|
||||
content="42",
|
||||
tool_call_id="call_1",
|
||||
name="f",
|
||||
),
|
||||
]
|
||||
out = messages_to_dicts(msgs)
|
||||
self.assertEqual(out[0], {"role": "user", "content": "hi"})
|
||||
self.assertEqual(out[1]["tool_calls"][0]["function"]["name"], "f")
|
||||
self.assertEqual(out[2]["tool_call_id"], "call_1")
|
||||
|
||||
def test_split_reasoning(self):
|
||||
r, c = split_reasoning("<think>plan</think>final", "<think>", "</think>")
|
||||
self.assertEqual(r, "plan")
|
||||
self.assertEqual(c, "final")
|
||||
|
||||
def test_parse_tool_calls_with_shim(self):
|
||||
tm = types.SimpleNamespace(
|
||||
tool_call_start="<tool_call>",
|
||||
tool_call_end="</tool_call>",
|
||||
parse_tool_call=lambda body, tools: {"name": "get_weather", "arguments": {"location": body.strip()}},
|
||||
)
|
||||
calls, remaining = parse_tool_calls(
|
||||
"<tool_call>Paris</tool_call>",
|
||||
tm,
|
||||
tools=None,
|
||||
)
|
||||
self.assertEqual(len(calls), 1)
|
||||
self.assertEqual(calls[0]["name"], "get_weather")
|
||||
self.assertEqual(calls[0]["arguments"], '{"location": "Paris"}')
|
||||
|
|
@ -2,11 +2,13 @@
|
|||
import asyncio
|
||||
from concurrent import futures
|
||||
import argparse
|
||||
import gc
|
||||
import json
|
||||
import signal
|
||||
import sys
|
||||
import os
|
||||
import types
|
||||
from typing import List
|
||||
import time
|
||||
|
||||
import backend_pb2
|
||||
import backend_pb2_grpc
|
||||
|
|
@ -15,13 +17,13 @@ import grpc
|
|||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'common'))
|
||||
from grpc_auth import get_auth_interceptors
|
||||
from python_utils import messages_to_dicts, parse_options
|
||||
from mlx_utils import parse_tool_calls, split_reasoning
|
||||
|
||||
from mlx_lm import load, stream_generate
from mlx_lm.sample_utils import make_logits_processors, make_sampler
|
||||
from mlx_lm.models.cache import make_prompt_cache, can_trim_prompt_cache, trim_prompt_cache
|
||||
import mlx.core as mx
|
||||
import base64
|
||||
import io
|
||||
|
||||
from mlx_cache import ThreadSafeLRUPromptCache
|
||||
|
||||
|
|
@ -30,21 +32,6 @@ _ONE_DAY_IN_SECONDS = 60 * 60 * 24
|
|||
# If MAX_WORKERS are specified in the environment use it, otherwise default to 1
|
||||
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
|
||||
|
||||
def is_float(s):
|
||||
"""Check if a string can be converted to float."""
|
||||
try:
|
||||
float(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
def is_int(s):
|
||||
"""Check if a string can be converted to int."""
|
||||
try:
|
||||
int(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
# Implement the BackendServicer class with the service methods
|
||||
class BackendServicer(backend_pb2_grpc.BackendServicer):
|
||||
"""
|
||||
|
|
@ -78,46 +65,27 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
try:
|
||||
print(f"Loading MLX model: {request.Model}", file=sys.stderr)
|
||||
print(f"Request: {request}", file=sys.stderr)
|
||||
|
||||
# Parse options like in the diffusers backend
|
||||
options = request.Options
|
||||
self.options = {}
|
||||
|
||||
# The options are a list of strings in this form optname:optvalue
|
||||
# We store all the options in a dict for later use
|
||||
for opt in options:
|
||||
if ":" not in opt:
|
||||
continue
|
||||
key, value = opt.split(":", 1) # Split only on first colon to handle values with colons
|
||||
|
||||
# Convert numeric values to appropriate types
|
||||
if is_float(value):
|
||||
value = float(value)
|
||||
elif is_int(value):
|
||||
value = int(value)
|
||||
elif value.lower() in ["true", "false"]:
|
||||
value = value.lower() == "true"
|
||||
|
||||
self.options[key] = value
|
||||
|
||||
|
||||
# Parse Options[] key:value strings into a typed dict (shared helper)
|
||||
self.options = parse_options(request.Options)
|
||||
print(f"Options: {self.options}", file=sys.stderr)
|
||||
|
||||
|
||||
# Build tokenizer config for MLX using options
|
||||
tokenizer_config = {}
|
||||
|
||||
|
||||
# Handle trust_remote_code from request or options
|
||||
if request.TrustRemoteCode or self.options.get("trust_remote_code", False):
|
||||
tokenizer_config["trust_remote_code"] = True
|
||||
|
||||
|
||||
# Handle EOS token from options
|
||||
if "eos_token" in self.options:
|
||||
tokenizer_config["eos_token"] = self.options["eos_token"]
|
||||
|
||||
|
||||
# Handle other tokenizer config options
|
||||
for key in ["pad_token", "bos_token", "unk_token", "sep_token", "cls_token", "mask_token"]:
|
||||
if key in self.options:
|
||||
tokenizer_config[key] = self.options[key]
|
||||
|
||||
|
||||
# Load model and tokenizer using MLX
|
||||
if tokenizer_config:
|
||||
print(f"Loading with tokenizer_config: {tokenizer_config}", file=sys.stderr)
|
||||
|
|
@ -125,6 +93,21 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
else:
|
||||
self.model, self.tokenizer = load(request.Model)
|
||||
|
||||
# mlx_lm.load() returns a TokenizerWrapper that detects tool
|
||||
# calling and thinking markers from the chat template / vocab.
|
||||
# mlx-lm >= 0.30 also exposes a parser callable on the wrapper;
|
||||
# earlier versions don't (we fall back to json.loads inside
|
||||
# _tool_module_from_tokenizer below).
|
||||
has_tools = bool(getattr(self.tokenizer, "has_tool_calling", False))
|
||||
has_thinking = bool(getattr(self.tokenizer, "has_thinking", False))
|
||||
tcs = getattr(self.tokenizer, "tool_call_start", None)
|
||||
tce = getattr(self.tokenizer, "tool_call_end", None)
|
||||
print(
|
||||
f"MLX tokenizer capabilities: has_tool_calling={has_tools} "
|
||||
f"has_thinking={has_thinking} tool_call_start={tcs!r} tool_call_end={tce!r}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
# Initialize thread-safe LRU prompt cache for efficient generation
|
||||
max_cache_entries = self.options.get("max_cache_entries", 10)
|
||||
self.max_kv_size = self.options.get("max_kv_size", None)
|
||||
|
|
@ -134,7 +117,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
can_trim_fn=can_trim_prompt_cache,
|
||||
trim_fn=trim_prompt_cache,
|
||||
)
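# ThreadSafeLRUPromptCache comes from mlx_cache and is not shown in this diff.
# From its call sites here it is assumed to expose roughly
# insert_cache(model_key, token_ids, cache) and clear(), reusing the longest
# cached token prefix and trimming entries via trim_fn when can_trim_fn allows.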
|
||||
|
||||
|
||||
except Exception as err:
|
||||
print(f"Error loading MLX model {err=}, {type(err)=}", file=sys.stderr)
|
||||
return backend_pb2.Result(success=False, message=f"Error loading MLX model: {err}")
|
||||
|
|
@ -172,30 +155,58 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
remaining_tokens = cache_key
|
||||
|
||||
# Build generation parameters using request attributes and options
|
||||
max_tokens, sampler_params, logits_params, stop_words = self._build_generation_params(request)
|
||||
|
||||
print(f"Generating text with MLX - max_tokens: {max_tokens}, cache_hit: {len(remaining_tokens) < len(cache_key)}", file=sys.stderr)
|
||||
print(
|
||||
f"Generating text with MLX - max_tokens: {max_tokens}, "
|
||||
f"cache_hit: {len(remaining_tokens) < len(cache_key)}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
# Create sampler and optional logits processors (penalties)
|
||||
sampler = make_sampler(**sampler_params)
|
||||
logits_processors = make_logits_processors(**logits_params) if logits_params else None
|
||||
|
||||
# Use stream_generate to collect text + track tokens for cache key
|
||||
generated_text = []
|
||||
last_response = None
|
||||
for response in stream_generate(
|
||||
self.model,
|
||||
self.tokenizer,
|
||||
prompt=remaining_tokens if remaining_tokens else cache_key,
|
||||
max_tokens=max_tokens,
|
||||
sampler=sampler,
|
||||
logits_processors=logits_processors,
|
||||
prompt_cache=prompt_cache,
|
||||
):
|
||||
generated_text.append(response.text)
|
||||
cache_key.append(response.token)
|
||||
last_response = response
|
||||
# Early stop on user-provided stop sequences
|
||||
if stop_words and any(s in "".join(generated_text) for s in stop_words):
|
||||
break
|
||||
|
||||
# Insert completed cache
|
||||
self.lru_cache.insert_cache(self.model_key, cache_key, prompt_cache)
|
||||
|
||||
full_text = self._truncate_at_stop("".join(generated_text), stop_words)
|
||||
content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes = (
|
||||
self._finalize_output(request, full_text, last_response)
|
||||
)
|
||||
|
||||
return backend_pb2.Reply(
|
||||
message=bytes(content, encoding='utf-8'),
|
||||
prompt_tokens=prompt_tokens,
|
||||
tokens=completion_tokens,
|
||||
logprobs=logprobs_bytes,
|
||||
chat_deltas=[
|
||||
backend_pb2.ChatDelta(
|
||||
content=content,
|
||||
reasoning_content=reasoning_content,
|
||||
tool_calls=tool_calls_proto,
|
||||
)
|
||||
],
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error in MLX Predict: {e}", file=sys.stderr)
|
||||
|
|
@ -206,7 +217,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
def Embedding(self, request, context):
|
||||
"""
|
||||
A gRPC method that calculates embeddings for a given sentence.
|
||||
|
||||
|
||||
Note: MLX-LM doesn't support embeddings directly. This method returns an error.
|
||||
|
||||
Args:
|
||||
|
|
@ -221,6 +232,62 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
context.set_details("Embeddings are not supported in the MLX backend.")
|
||||
return backend_pb2.EmbeddingResult()
|
||||
|
||||
async def TokenizeString(self, request, context):
|
||||
"""Tokenize ``request.Prompt`` using the loaded model's tokenizer."""
|
||||
if not hasattr(self, "tokenizer") or self.tokenizer is None:
|
||||
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
|
||||
context.set_details("tokenizer not loaded")
|
||||
return backend_pb2.TokenizationResponse()
|
||||
try:
|
||||
tokens = self.tokenizer.encode(request.Prompt)
|
||||
if hasattr(tokens, "tolist"):
|
||||
tokens = tokens.tolist()
|
||||
tokens = list(tokens)
|
||||
return backend_pb2.TokenizationResponse(length=len(tokens), tokens=tokens)
|
||||
except Exception as e:
|
||||
context.set_code(grpc.StatusCode.INTERNAL)
|
||||
context.set_details(str(e))
|
||||
return backend_pb2.TokenizationResponse()
|
||||
|
||||
async def Free(self, request, context):
|
||||
"""Drop the loaded model, tokenizer and prompt cache.
|
||||
|
||||
Metal / CUDA memory is released via ``gc.collect()`` + the
|
||||
platform-specific cache clear hooks when available.
|
||||
"""
|
||||
try:
|
||||
if hasattr(self, "model"):
|
||||
del self.model
|
||||
if hasattr(self, "tokenizer"):
|
||||
del self.tokenizer
|
||||
if hasattr(self, "lru_cache") and self.lru_cache is not None:
|
||||
try:
|
||||
self.lru_cache.clear()
|
||||
except Exception:
|
||||
pass
|
||||
self.lru_cache = None
|
||||
gc.collect()
|
||||
# Metal: drop the cached allocator. mlx.clear_cache (mlx >= 0.30)
|
||||
# supersedes the now-deprecated mlx.metal.clear_cache.
|
||||
try:
|
||||
if hasattr(mx, "clear_cache"):
|
||||
mx.clear_cache()
|
||||
elif hasattr(mx, "metal") and hasattr(mx.metal, "clear_cache"):
|
||||
mx.metal.clear_cache()
|
||||
except Exception:
|
||||
pass
|
||||
# CUDA: release the torch cache if a CUDA-backed mlx variant
|
||||
# happens to be installed alongside torch (best-effort).
|
||||
try:
|
||||
import torch # type: ignore
|
||||
if torch.cuda.is_available():
|
||||
torch.cuda.empty_cache()
|
||||
except Exception:
|
||||
pass
|
||||
return backend_pb2.Result(success=True, message="MLX model freed")
|
||||
except Exception as e:
|
||||
return backend_pb2.Result(success=False, message=str(e))
|
||||
|
||||
async def PredictStream(self, request, context):
|
||||
"""
|
||||
Generates text based on the given prompt and sampling parameters, and streams the results using MLX.
|
||||
|
|
@ -251,24 +318,64 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
remaining_tokens = cache_key
|
||||
|
||||
# Build generation parameters using request attributes and options
|
||||
max_tokens, sampler_params, logits_params, stop_words = self._build_generation_params(
|
||||
request, default_max_tokens=512
|
||||
)
|
||||
|
||||
print(f"Streaming text with MLX - max_tokens: {max_tokens}, cache_hit: {len(remaining_tokens) < len(cache_key)}", file=sys.stderr)
|
||||
print(
|
||||
f"Streaming text with MLX - max_tokens: {max_tokens}, "
|
||||
f"cache_hit: {len(remaining_tokens) < len(cache_key)}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
|
||||
# Create sampler and optional logits processors (penalties)
|
||||
sampler = make_sampler(**sampler_params)
|
||||
logits_processors = make_logits_processors(**logits_params) if logits_params else None
|
||||
|
||||
# Stream text generation using MLX with proper parameters
|
||||
accumulated = []
|
||||
last_response = None
|
||||
for response in stream_generate(
|
||||
self.model,
|
||||
self.tokenizer,
|
||||
prompt=remaining_tokens if remaining_tokens else cache_key,
|
||||
max_tokens=max_tokens,
|
||||
sampler=sampler,
|
||||
logits_processors=logits_processors,
|
||||
prompt_cache=prompt_cache,
|
||||
):
|
||||
cache_key.append(response.token)
|
||||
accumulated.append(response.text)
|
||||
last_response = response
|
||||
# Emit a content delta. Structured reasoning / tool parsing
|
||||
# happens on the final chunk so we don't fragment the state
|
||||
# machine in v1.
|
||||
yield backend_pb2.Reply(
|
||||
message=bytes(response.text, encoding='utf-8'),
|
||||
chat_deltas=[backend_pb2.ChatDelta(content=response.text)],
|
||||
)
|
||||
# Early stop on user-provided stop sequences
|
||||
if stop_words and any(s in "".join(accumulated) for s in stop_words):
|
||||
break
|
||||
|
||||
# Final chunk: run reasoning + tool parsing on accumulated text
|
||||
# and emit the structured ChatDelta with token counts + logprobs.
|
||||
full_text = self._truncate_at_stop("".join(accumulated), stop_words)
|
||||
content, reasoning_content, tool_calls_proto, prompt_tokens, completion_tokens, logprobs_bytes = (
|
||||
self._finalize_output(request, full_text, last_response)
|
||||
)
|
||||
yield backend_pb2.Reply(
|
||||
message=b"",
|
||||
prompt_tokens=prompt_tokens,
|
||||
tokens=completion_tokens,
|
||||
logprobs=logprobs_bytes,
|
||||
chat_deltas=[
|
||||
backend_pb2.ChatDelta(
|
||||
content="",
|
||||
reasoning_content=reasoning_content,
|
||||
tool_calls=tool_calls_proto,
|
||||
)
|
||||
],
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error in MLX PredictStream: {e}", file=sys.stderr)
|
||||
|
|
@ -294,21 +401,33 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
Returns:
|
||||
str: The prepared prompt.
|
||||
"""
|
||||
# If tokenizer template is enabled and messages are provided instead of prompt, apply the tokenizer template
|
||||
# If tokenizer template is enabled and messages are provided instead
|
||||
# of prompt, apply the tokenizer template (forwards tool definitions
|
||||
# and enable_thinking when the model supports them).
|
||||
if not request.Prompt and request.UseTokenizerTemplate and request.Messages:
|
||||
messages = messages_to_dicts(request.Messages)
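# Illustrative example: messages_to_dicts turns the proto Messages into plain
# dicts, e.g. an assistant tool-call turn becomes
#   {"role": "assistant", "content": "",
#    "tool_calls": [{"id": "call_1", "type": "function",
#                    "function": {"name": "f", "arguments": "{}"}}]}
# (shape taken from test_messages_to_dicts_roundtrip in the accompanying tests).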
|
||||
kwargs = {"tokenize": False, "add_generation_prompt": True}
|
||||
if request.Tools:
|
||||
try:
|
||||
kwargs["tools"] = json.loads(request.Tools)
|
||||
except json.JSONDecodeError:
|
||||
pass
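# request.Tools is expected to hold the OpenAI-style tools array as JSON,
# e.g. (illustrative only):
#   [{"type": "function",
#     "function": {"name": "get_weather",
#                  "parameters": {"type": "object",
#                                 "properties": {"location": {"type": "string"}}}}}]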
|
||||
enable_thinking = request.Metadata.get("enable_thinking", "").lower()
|
||||
if enable_thinking == "true":
|
||||
kwargs["enable_thinking"] = True
|
||||
|
||||
try:
|
||||
return self.tokenizer.apply_chat_template(messages, **kwargs)
|
||||
except TypeError:
|
||||
# Fallback for tokenizers whose template doesn't accept
|
||||
# tools= or enable_thinking=.
|
||||
return self.tokenizer.apply_chat_template(
|
||||
messages,
|
||||
tokenize=False,
|
||||
add_generation_prompt=True,
|
||||
)
|
||||
return request.Prompt
|
||||
|
||||
def _get_tokens_from_prompt(self, prompt_text: str) -> List[int]:
|
||||
"""
|
||||
|
|
@ -338,18 +457,19 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
default_max_tokens: Default max_tokens if not specified.
|
||||
|
||||
Returns:
|
||||
tuple: (max_tokens, sampler_params dict)
|
||||
tuple: (max_tokens, sampler_params dict, logits_processor_params dict,
|
||||
stop_words list)
|
||||
"""
|
||||
# Extract max_tokens
|
||||
max_tokens = getattr(request, 'Tokens', default_max_tokens)
|
||||
if max_tokens == 0:
|
||||
max_tokens = default_max_tokens
|
||||
|
||||
|
||||
# Extract sampler parameters from request attributes
|
||||
temp = getattr(request, 'Temperature', 0.0)
|
||||
if temp == 0.0:
|
||||
temp = 0.6 # Default temperature
|
||||
|
||||
|
||||
top_p = getattr(request, 'TopP', 0.0)
|
||||
if top_p == 0.0:
|
||||
top_p = 1.0 # Default top_p
|
||||
|
|
@ -369,18 +489,31 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
'xtc_threshold': 0.0,
|
||||
'xtc_probability': 0.0,
|
||||
}
|
||||
|
||||
|
||||
# Logits processor parameters — only set fields the request actually
|
||||
# provides so we can feed them unconditionally to make_logits_processors.
|
||||
logits_params = {}
|
||||
repetition_penalty = getattr(request, 'RepetitionPenalty', 0.0) or 0.0
|
||||
if repetition_penalty and repetition_penalty != 1.0:
|
||||
logits_params['repetition_penalty'] = repetition_penalty
|
||||
presence_penalty = getattr(request, 'PresencePenalty', 0.0) or 0.0
|
||||
if presence_penalty:
|
||||
logits_params['presence_penalty'] = presence_penalty
|
||||
frequency_penalty = getattr(request, 'FrequencyPenalty', 0.0) or 0.0
|
||||
if frequency_penalty:
|
||||
logits_params['frequency_penalty'] = frequency_penalty
|
||||
|
||||
# Add seed if specified
|
||||
seed = getattr(request, 'Seed', 0)
|
||||
if seed != 0:
|
||||
mx.random.seed(seed)
|
||||
|
||||
|
||||
# Override with options if available
|
||||
if hasattr(self, 'options'):
|
||||
# Max tokens from options
|
||||
if 'max_tokens' in self.options:
|
||||
max_tokens = self.options['max_tokens']
|
||||
|
||||
|
||||
# Sampler parameters from options
|
||||
sampler_option_mapping = {
|
||||
'temp': 'temp',
|
||||
|
|
@ -391,32 +524,142 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|||
'xtc_threshold': 'xtc_threshold',
|
||||
'xtc_probability': 'xtc_probability',
|
||||
}
|
||||
|
||||
|
||||
for option_key, param_key in sampler_option_mapping.items():
|
||||
if option_key in self.options:
|
||||
sampler_params[param_key] = self.options[option_key]
|
||||
|
||||
|
||||
# Logits processor overrides
|
||||
for option_key in ('repetition_penalty', 'presence_penalty', 'frequency_penalty'):
|
||||
if option_key in self.options:
|
||||
logits_params[option_key] = self.options[option_key]
|
||||
|
||||
# Handle seed from options
|
||||
if 'seed' in self.options:
|
||||
mx.random.seed(self.options['seed'])
|
||||
|
||||
|
||||
# Special tokens for XTC sampling (if tokenizer has eos_token_ids)
|
||||
xtc_special_tokens = []
|
||||
if hasattr(self.tokenizer, 'eos_token_ids') and self.tokenizer.eos_token_ids:
|
||||
xtc_special_tokens = list(self.tokenizer.eos_token_ids)
|
||||
elif hasattr(self.tokenizer, 'eos_token_id') and self.tokenizer.eos_token_id is not None:
|
||||
xtc_special_tokens = [self.tokenizer.eos_token_id]
|
||||
|
||||
|
||||
# Add newline token if available
|
||||
try:
|
||||
newline_tokens = self.tokenizer.encode("\n")
|
||||
xtc_special_tokens.extend(newline_tokens)
|
||||
except Exception:
|
||||
pass # Skip if encoding fails
|
||||
|
||||
|
||||
sampler_params['xtc_special_tokens'] = xtc_special_tokens

# Stop sequences are applied post-decode (mlx-lm doesn't have a
# built-in stop-sequence sampler param). Preserve the list here.
stop_words = list(getattr(request, 'StopPrompts', []) or [])

return max_tokens, sampler_params, logits_params, stop_words
|
||||
|
||||
def _tool_module_from_tokenizer(self):
|
||||
"""Build a duck-typed tool module from the TokenizerWrapper.
|
||||
|
||||
On mlx-lm >= 0.30 the wrapper exposes a ``tool_parser`` callable
|
||||
that's been resolved from the model's chat template. On older
|
||||
releases (e.g. 0.29.x) the wrapper only carries the start/end
|
||||
markers — fall back to ``json.loads`` of the body, which matches
|
||||
what ``mlx_lm.tool_parsers.json_tools.parse_tool_call`` does on
|
||||
HEAD and covers the only format 0.29 detects (``<tool_call>``).
|
||||
"""
|
||||
start = getattr(self.tokenizer, "tool_call_start", None)
|
||||
end = getattr(self.tokenizer, "tool_call_end", None)
|
||||
if not start:
|
||||
return None
|
||||
parse_fn = getattr(self.tokenizer, "tool_parser", None)
|
||||
if parse_fn is None:
|
||||
def parse_fn(body, tools): # noqa: E306 — local fallback
|
||||
return json.loads(body.strip())
|
||||
return types.SimpleNamespace(
|
||||
tool_call_start=start,
|
||||
tool_call_end=end or "",
|
||||
parse_tool_call=parse_fn,
|
||||
)
|
||||
|
||||
def _finalize_output(self, request, generated_text, last_response):
|
||||
"""Build a ChatDelta + token counts + logprobs from accumulated output.
|
||||
|
||||
Returns ``(content, reasoning_content, tool_calls_proto,
|
||||
prompt_token_count, completion_token_count, logprobs_bytes)``.
|
||||
"""
|
||||
content = generated_text
|
||||
reasoning_content = ""
|
||||
|
||||
if getattr(self.tokenizer, "has_thinking", False):
|
||||
think_start = getattr(self.tokenizer, "think_start", "") or ""
|
||||
think_end = getattr(self.tokenizer, "think_end", "") or ""
|
||||
reasoning_content, content = split_reasoning(content, think_start, think_end)
|
||||
|
||||
tool_calls_proto: List[backend_pb2.ToolCallDelta] = []
|
||||
tool_module = None
|
||||
if getattr(self.tokenizer, "has_tool_calling", False):
|
||||
tool_module = self._tool_module_from_tokenizer()
|
||||
if tool_module is not None:
|
||||
parsed_tools = None
|
||||
if request.Tools:
|
||||
try:
|
||||
parsed_tools = json.loads(request.Tools)
|
||||
except json.JSONDecodeError:
|
||||
parsed_tools = None
|
||||
calls, content = parse_tool_calls(content, tool_module, parsed_tools)
|
||||
for c in calls:
|
||||
tool_calls_proto.append(
|
||||
backend_pb2.ToolCallDelta(
|
||||
index=c["index"],
|
||||
id=c["id"],
|
||||
name=c["name"],
|
||||
arguments=c["arguments"],
|
||||
)
|
||||
)
|
||||
|
||||
prompt_token_count = int(getattr(last_response, "prompt_tokens", 0) or 0) if last_response else 0
|
||||
completion_token_count = int(getattr(last_response, "generation_tokens", 0) or 0) if last_response else 0
|
||||
|
||||
logprobs_bytes = b""
|
||||
# Logprobs extraction — only when the request asked for them.
|
||||
if last_response is not None and int(getattr(request, "Logprobs", 0) or 0) > 0:
|
||||
try:
|
||||
lp = getattr(last_response, "logprobs", None)
|
||||
if lp is not None:
|
||||
# GenerationResponse.logprobs on the last chunk is the
|
||||
# logprob distribution of the final token. Without a
|
||||
# per-token history we at minimum surface the last token's
|
||||
# top-1 logprob so clients get a non-empty field.
|
||||
token_id = int(getattr(last_response, "token", 0) or 0)
|
||||
token_text = self.tokenizer.decode([token_id]) if token_id else ""
|
||||
top_logprob = float(lp[token_id]) if hasattr(lp, "__getitem__") else 0.0
|
||||
logprobs_bytes = json.dumps(
|
||||
{
|
||||
"content": [
|
||||
{"token": token_text, "logprob": top_logprob}
|
||||
]
|
||||
}
|
||||
).encode("utf-8")
|
||||
except Exception as e:
|
||||
print(f"[mlx] Logprobs extraction failed: {e}", file=sys.stderr)
|
||||
|
||||
return content, reasoning_content, tool_calls_proto, prompt_token_count, completion_token_count, logprobs_bytes
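# Illustrative example (token text and value hypothetical): when Logprobs is
# requested, logprobs_bytes decodes to a single-entry OpenAI-style payload:
#   {"content": [{"token": " 42", "logprob": -0.0831}]}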
|
||||
|
||||
def _truncate_at_stop(self, text, stop_words):
|
||||
"""Truncate ``text`` at the first occurrence of any stop sequence."""
|
||||
if not stop_words:
|
||||
return text
|
||||
earliest = len(text)
|
||||
for stop in stop_words:
|
||||
if not stop:
|
||||
continue
|
||||
idx = text.find(stop)
|
||||
if idx >= 0 and idx < earliest:
|
||||
earliest = idx
|
||||
return text[:earliest] if earliest < len(text) else text
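# Illustrative usage: with stop_words = ["</s>", "\nUser:"] the text is cut at
# whichever stop sequence occurs first, e.g.
#   self._truncate_at_stop("Hello</s> ignored", ["</s>"]) -> "Hello"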
|
||||
|
||||
async def serve(address):
|
||||
# Start asyncio gRPC server
|
||||
|
|
|
|||
|
|
@ -1,11 +1,20 @@
|
|||
import os
|
||||
import sys
|
||||
import unittest
|
||||
import subprocess
|
||||
import time
|
||||
import types
|
||||
|
||||
import grpc
|
||||
import backend_pb2
|
||||
import backend_pb2_grpc
|
||||
|
||||
# Make the shared helpers importable so we can unit-test them without a
|
||||
# running gRPC server.
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
|
||||
from python_utils import messages_to_dicts, parse_options
|
||||
from mlx_utils import parse_tool_calls, split_reasoning
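# The shared parse_tool_calls helper is imported from mlx_utils above; its
# implementation is not part of this diff. Below is a minimal sketch consistent
# with test_parse_tool_calls_with_shim (illustrative only; the "call_<n>" id
# scheme and the strip() of the remaining text are assumptions).
import json


def _parse_tool_calls_sketch(text, tool_module, tools=None):
    # Scan for <start>...<end> spans, delegate body parsing to the tool module,
    # normalize arguments to a JSON string, and strip the spans from the text.
    calls, remaining, idx = [], text, 0
    start_marker = tool_module.tool_call_start
    end_marker = tool_module.tool_call_end
    while start_marker in remaining and end_marker in remaining:
        start = remaining.find(start_marker)
        end = remaining.find(end_marker, start)
        if end < 0:
            break
        body = remaining[start + len(start_marker):end]
        parsed = tool_module.parse_tool_call(body, tools)
        calls.append({
            "index": idx,
            "id": f"call_{idx}",
            "name": parsed.get("name", ""),
            "arguments": json.dumps(parsed.get("arguments", {})),
        })
        idx += 1
        remaining = remaining[:start] + remaining[end + len(end_marker):]
    return calls, remaining.strip()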
|
||||
|
||||
class TestBackendServicer(unittest.TestCase):
|
||||
"""
|
||||
TestBackendServicer is the class that tests the gRPC service.
|
||||
|
|
@ -231,4 +240,104 @@ class TestBackendServicer(unittest.TestCase):
|
|||
self.tearDown()
|
||||
|
||||
|
||||
def test_tokenize_string(self):
|
||||
"""TokenizeString should return a non-empty token list for a known prompt."""
|
||||
try:
|
||||
self.setUp()
|
||||
with grpc.insecure_channel("localhost:50051") as channel:
|
||||
stub = backend_pb2_grpc.BackendStub(channel)
|
||||
response = stub.LoadModel(
|
||||
backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit")
|
||||
)
|
||||
self.assertTrue(response.success)
|
||||
resp = stub.TokenizeString(backend_pb2.PredictOptions(Prompt="Hello, world"))
|
||||
self.assertGreater(resp.length, 0)
|
||||
self.assertEqual(len(list(resp.tokens)), resp.length)
|
||||
except Exception as err:
|
||||
print(err)
|
||||
self.fail("TokenizeString service failed")
|
||||
finally:
|
||||
self.tearDown()
|
||||
|
||||
def test_free(self):
|
||||
"""Free should release the model and not crash on subsequent calls."""
|
||||
try:
|
||||
self.setUp()
|
||||
with grpc.insecure_channel("localhost:50051") as channel:
|
||||
stub = backend_pb2_grpc.BackendStub(channel)
|
||||
response = stub.LoadModel(
|
||||
backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit")
|
||||
)
|
||||
self.assertTrue(response.success)
|
||||
free_resp = stub.Free(backend_pb2.HealthMessage())
|
||||
self.assertTrue(free_resp.success)
|
||||
except Exception as err:
|
||||
print(err)
|
||||
self.fail("Free service failed")
|
||||
finally:
|
||||
self.tearDown()
|
||||
|
||||
|
||||
class TestSharedHelpers(unittest.TestCase):
|
||||
"""Server-less unit tests for the helpers the mlx backend depends on."""
|
||||
|
||||
def test_parse_options_typed(self):
|
||||
opts = parse_options(["temperature:0.7", "max_tokens:128", "trust:true", "name:hello", "no_colon_skipped"])
|
||||
self.assertEqual(opts["temperature"], 0.7)
|
||||
self.assertEqual(opts["max_tokens"], 128)
|
||||
self.assertIs(opts["trust"], True)
|
||||
self.assertEqual(opts["name"], "hello")
|
||||
self.assertNotIn("no_colon_skipped", opts)
|
||||
|
||||
def test_messages_to_dicts_roundtrip(self):
|
||||
# Build proto Message objects (via backend_pb2 to match real gRPC)
|
||||
msgs = [
|
||||
backend_pb2.Message(role="user", content="hi"),
|
||||
backend_pb2.Message(
|
||||
role="assistant",
|
||||
content="",
|
||||
tool_calls='[{"id":"call_1","type":"function","function":{"name":"f","arguments":"{}"}}]',
|
||||
),
|
||||
backend_pb2.Message(
|
||||
role="tool",
|
||||
content="42",
|
||||
tool_call_id="call_1",
|
||||
name="f",
|
||||
),
|
||||
]
|
||||
out = messages_to_dicts(msgs)
|
||||
self.assertEqual(out[0], {"role": "user", "content": "hi"})
|
||||
self.assertEqual(out[1]["role"], "assistant")
|
||||
self.assertEqual(out[1]["tool_calls"][0]["function"]["name"], "f")
|
||||
self.assertEqual(out[2]["tool_call_id"], "call_1")
|
||||
self.assertEqual(out[2]["name"], "f")
|
||||
|
||||
def test_split_reasoning(self):
|
||||
r, c = split_reasoning("<think>step 1\nstep 2</think>The answer is 42.", "<think>", "</think>")
|
||||
self.assertEqual(r, "step 1\nstep 2")
|
||||
self.assertEqual(c, "The answer is 42.")
|
||||
|
||||
def test_split_reasoning_no_marker(self):
|
||||
r, c = split_reasoning("just text", "<think>", "</think>")
|
||||
self.assertEqual(r, "")
|
||||
self.assertEqual(c, "just text")
|
||||
|
||||
def test_parse_tool_calls_with_shim(self):
|
||||
tm = types.SimpleNamespace(
|
||||
tool_call_start="<tool_call>",
|
||||
tool_call_end="</tool_call>",
|
||||
parse_tool_call=lambda body, tools: {"name": "get_weather", "arguments": {"location": body.strip()}},
|
||||
)
|
||||
calls, remaining = parse_tool_calls(
|
||||
"Sure: <tool_call>Paris</tool_call>",
|
||||
tm,
|
||||
tools=None,
|
||||
)
|
||||
self.assertEqual(len(calls), 1)
|
||||
self.assertEqual(calls[0]["name"], "get_weather")
|
||||
self.assertEqual(calls[0]["arguments"], '{"location": "Paris"}')
|
||||
self.assertEqual(calls[0]["index"], 0)
|
||||
self.assertNotIn("<tool_call>", remaining)
|
||||
|
||||
|
||||
# Unit tests for ThreadSafeLRUPromptCache are in test_mlx_cache.py
|
||||
|
|
@ -1,3 +1,3 @@
|
|||
--extra-index-url https://download.pytorch.org/whl/rocm6.3
|
||||
--extra-index-url https://download.pytorch.org/whl/rocm7.0
|
||||
torch
|
||||
nemo_toolkit[asr]
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
--extra-index-url https://download.pytorch.org/whl/rocm6.4
|
||||
torch==2.8.0+rocm6.4
|
||||
--extra-index-url https://download.pytorch.org/whl/rocm7.0
|
||||
torch==2.10.0+rocm7.0
|
||||
transformers==4.56.1
|
||||
accelerate
|
||||
librosa==0.11.0
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
--extra-index-url https://download.pytorch.org/whl/rocm6.4
|
||||
torch==2.8.0+rocm6.4
|
||||
--extra-index-url https://download.pytorch.org/whl/rocm7.0
|
||||
torch==2.10.0+rocm7.0
|
||||
accelerate
|
||||
llvmlite==0.43.0
|
||||
numba==0.60.0
|
||||
|
|
|
|||