LocalAI/backend/python/transformers

Latest commit: 923ebbb344 by Ettore Di Giacinto, 2026-01-23 15:18:41 +01:00
feat(qwen-tts): add Qwen-tts backend (#8163)

* feat(qwen-tts): add Qwen-tts backend
* Update intel deps
* Drop flash-attn for cuda13

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File                        Last commit                                                                                Date
backend.py                  feat(mlx): add mlx backend (#6049)                                                         2025-08-22 08:42:29 +02:00
install.sh                  feat: Add backend gallery (#5607)                                                          2025-06-15 14:56:52 +02:00
Makefile                    feat(mlx): add mlx backend (#6049)                                                         2025-08-22 08:42:29 +02:00
README.md                   feat(transformers): add embeddings with Automodel (#1308)                                  2023-11-20 21:21:17 +01:00
requirements-cpu.txt        chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)   2026-01-12 23:46:32 +00:00
requirements-cublas12.txt   chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)   2026-01-12 23:46:32 +00:00
requirements-cublas13.txt   chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)   2026-01-12 23:46:32 +00:00
requirements-hipblas.txt    chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)   2026-01-12 23:46:32 +00:00
requirements-intel.txt      feat(qwen-tts): add Qwen-tts backend (#8163)                                               2026-01-23 15:18:41 +01:00
requirements.txt            chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)   2026-01-12 23:46:32 +00:00
run.sh                      feat: Add backend gallery (#5607)                                                          2025-06-15 14:56:52 +02:00
test.py                     feat(transformers): merge sentencetransformers backend (#4624)                             2025-01-18 18:30:30 +01:00
test.sh                     feat: Add backend gallery (#5607)                                                          2025-06-15 14:56:52 +02:00

Creating a separate environment for the transformers project:

    make transformers
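As a rough sketch of what this step amounts to (an assumption, not the actual Makefile contents: the target presumably delegates to install.sh, which sets up a dedicated Python environment for this backend and installs the requirements variant matching the build, e.g. cpu, cublas12/13, hipblas, or intel), a manual equivalent of the environment-creation part would be:

```shell
# Hypothetical manual equivalent of `make transformers` (names assumed):
# create an isolated virtual environment for this backend and activate it.
python3 -m venv .venv
. .venv/bin/activate
# Confirm the interpreter now resolves inside the new environment.
python -c 'import sys; print(sys.prefix)'
```

With the environment active, the matching requirements file (for example `requirements-cpu.txt`) would then be installed with pip before running the backend via run.sh.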