Mirror of https://github.com/mudler/LocalAI (synced 2026-04-21 21:37:21 +00:00)
* feat(mlx-distributed): add new MLX-distributed backend

  Adds a new MLX distributed backend with support for both TCP and RDMA for model sharding. This implementation ties into the discovery implementation already in place, and re-uses the same P2P mechanism for TCP MLX-distributed inferencing. The auto-parallel implementation is inspired by Exo's (whose authors have been added to the acknowledgements for their great work!).

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Expose a CLI to facilitate backend starting

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat: make manual rank0 configurable via model configs

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add missing features from the MLX backend

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Apply suggestion from @mudler

  Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
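The "manual rank0 configurable via model configs" change implies a per-model option selecting which node acts as rank 0 of the distributed group. A minimal sketch of what such a model config could look like, assuming LocalAI's usual YAML model-config layout; the distributed keys (`rank0_address`, `transport`) are illustrative assumptions, not the backend's documented schema:

```yaml
# Hypothetical model config for the mlx-distributed backend.
# `name`, `backend`, and `parameters.model` follow LocalAI's common
# model-config layout; the keys below the comment line are assumed
# names for illustration only.
name: my-sharded-model
backend: mlx-distributed
parameters:
  model: mlx-community/some-model-4bit   # placeholder model reference
# Assumed distributed options (not a documented schema):
rank0_address: 10.0.0.5:50051   # node manually pinned as rank 0
transport: tcp                  # or rdma, per the commit message
```

The intent of a manual rank-0 setting is to make sharding deterministic: one node is always the coordinator instead of relying solely on discovery to elect it.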
| File |
|---|
| backend.py |
| coordinator.py |
| install.sh |
| Makefile |
| mlx_cache.py |
| requirements-cpu.txt |
| requirements-cublas12.txt |
| requirements-cublas13.txt |
| requirements-l4t12.txt |
| requirements-l4t13.txt |
| requirements-mps.txt |
| requirements.txt |
| run.sh |
| sharding.py |
| test.py |
| test.sh |