LocalAI/backend/python/mlx-vlm
commit 9621edb4c5
Author: Ettore Di Giacinto <mudler@localai.io>
Date:   2025-08-28 10:26:42 +02:00

    feat(diffusers): add support for wan2.2 (#6153)

    * feat(diffusers): add support for wan2.2

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

    * chore(ci): use ttl.sh for PRs

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

    * Add ftfy deps

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

    * Revert "chore(ci): use ttl.sh for PRs"

    This reverts commit c9fc3ecf28.

    * Simplify

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

    * chore: do not pin torch/torchvision on cuda12

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

    ---------

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File                  Last commit                                      Date
backend.py            feat(diffusers): add support for wan2.2 (#6153)  2025-08-28 10:26:42 +02:00
install.sh            Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00
Makefile              Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00
requirements-mps.txt  Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00
requirements.txt      Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00
run.sh                Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00
test.py               Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00
test.sh               Add mlx-vlm (#6119)                              2025-08-23 23:05:30 +02:00