LocalAI/backend/python/kokoro
Ettore Di Giacinto cfd95745ed
feat: add cuda13 images (#7404), 2025-12-02 14:24:35 +01:00

* chore(ci): add cuda13 jobs
* Add to pipelines and to capabilities. Start to work on the gallery
* gallery
* capabilities: try to detect by looking at /usr/local
* neutts
* backends.yaml
* add cuda13 l4t requirements.txt
* add cuda13 requirements.txt
* Fixups
* Fixups
* Pin vllm
* Not all backends are compatible
* add vllm to requirements
* vllm is not pre-compiled for cuda 13

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
backend.py                  feat: return complete audio for kokoro (#6842)     2025-10-28 08:49:18 +01:00
install.sh                  feat: Add backend gallery (#5607)                  2025-06-15 14:56:52 +02:00
Makefile                    feat(mlx): add mlx backend (#6049)                 2025-08-22 08:42:29 +02:00
README.md                   feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
requirements-cpu.txt        feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
requirements-cublas11.txt   feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
requirements-cublas12.txt   feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
requirements-cublas13.txt   feat: add cuda13 images (#7404)                    2025-12-02 14:24:35 +01:00
requirements-hipblas.txt    feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
requirements-intel.txt      feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
requirements-l4t12.txt      feat: add cuda13 images (#7404)                    2025-12-02 14:24:35 +01:00
requirements.txt            feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
run.sh                      feat: Add backend gallery (#5607)                  2025-06-15 14:56:52 +02:00
test.py                     feat(kokoro): complete kokoro integration (#5978)  2025-08-06 15:23:29 +02:00
test.sh                     feat: Add backend gallery (#5607)                  2025-06-15 14:56:52 +02:00

Kokoro TTS Backend for LocalAI

This is a gRPC server backend for LocalAI that uses the Kokoro TTS pipeline.
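Once a LocalAI instance is running with this backend installed, speech can be requested through LocalAI's HTTP `/tts` endpoint rather than by talking to the gRPC server directly. A minimal standard-library sketch; the base URL and model name below are illustrative assumptions, not values from this repository:

```python
# Hedged sketch: requesting speech from a running LocalAI instance.
# The endpoint shape follows LocalAI's /tts API; adjust host and model
# name to your deployment.
import json
import urllib.request

payload = json.dumps({"model": "kokoro", "input": "Hello from LocalAI!"}).encode()
req = urllib.request.Request(
    "http://localhost:8080/tts",  # default LocalAI address (assumption)
    data=payload,
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        with open("speech.wav", "wb") as f:
            f.write(resp.read())  # the server responds with raw audio bytes
    print("wrote speech.wav")
except OSError:  # covers URLError and timeouts
    print("LocalAI is not reachable; start it with the kokoro backend first")
```

If no server is reachable, the script degrades to a diagnostic message instead of raising.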

Creating a separate environment for the Kokoro project

make kokoro

Testing the gRPC server

make test

Features

  • Lightweight TTS model with 82 million parameters
  • Apache-licensed weights
  • Fast and cost-efficient
  • Multi-language support
  • Multiple voice options
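For reference, the underlying pipeline can also be exercised directly in Python. A hedged sketch following the upstream `kokoro` package's documented usage (`KPipeline`, 24 kHz output, `af_heart` voice); treat these names as assumptions if your installed version differs:

```python
# Sketch: direct use of the Kokoro TTS pipeline (assumes `make kokoro`
# has installed the kokoro, numpy and soundfile packages).
try:
    import numpy as np
    import soundfile as sf
    from kokoro import KPipeline

    pipeline = KPipeline(lang_code="a")  # "a" selects American English
    # The pipeline yields (graphemes, phonemes, audio) chunks; concatenating
    # them mirrors the backend's complete-audio behaviour (see #6842 above).
    chunks = [audio for _, _, audio in pipeline("Hello from LocalAI!", voice="af_heart")]
    sf.write("out.wav", np.concatenate(chunks), 24000)  # Kokoro emits 24 kHz audio
except Exception as exc:  # packages or model weights may be absent here
    print(f"kokoro pipeline unavailable: {exc}")
```

The backend's `backend.py` wraps this same pipeline behind LocalAI's gRPC TTS interface.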