LocalAI/backend/python/vllm/requirements-cublas12.txt


# Dependencies for the vLLM backend (CUDA 12 / cuBLAS build)
accelerate
torch==2.7.0
transformers
bitsandbytes