Mirror of https://github.com/mudler/LocalAI (synced 2026-04-21 21:37:21 +00:00)
fix: drop gguf VRAM estimation

Cleanup: this is now handled directly in llama.cpp, so there is no need to estimate it from Go. VRAM estimation in general is tricky, but llama.cpp (…)
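To illustrate why Go-side estimation is error-prone, here is a minimal sketch of the kind of heuristic such an estimator has to use. Everything in it is an assumption for illustration: the function name, the bytes-per-weight figure, the fp16 KV cache, and the flat 10% overhead are hypothetical and do not reflect LocalAI's removed code or llama.cpp's actual accounting.

```go
package main

import "fmt"

// estimateVRAM is a hypothetical, back-of-the-envelope estimate of the
// bytes needed to load a gguf model's weights plus its KV cache.
// Real usage also depends on backend, batch size, compute buffers,
// and quantization layout, which is why such estimates drift.
func estimateVRAM(paramCount int64, bytesPerWeight float64, layers, ctxLen, headDim, kvHeads int64) int64 {
	// Weights: parameter count times the average bytes per weight
	// for the chosen quantization.
	weights := int64(float64(paramCount) * bytesPerWeight)
	// KV cache: K and V tensors (x2) per layer, assumed fp16 (2 bytes).
	kvCache := 2 * 2 * layers * ctxLen * headDim * kvHeads
	// Flat ~10% overhead for compute buffers and fragmentation.
	return weights + kvCache + (weights+kvCache)/10
}

func main() {
	// Assumed 7B-parameter model, ~0.56 bytes/weight (a 4-bit-ish quant),
	// 32 layers, 4096-token context, 128-dim heads, 8 KV heads.
	est := estimateVRAM(7_000_000_000, 0.56, 32, 4096, 128, 8)
	fmt.Printf("estimated VRAM: %.2f GiB\n", float64(est)/(1<<30))
}
```

Each constant above is a guess with its own error bar, and they compound; doing the accounting inside llama.cpp, which knows the real tensor layouts and buffer sizes, avoids keeping a parallel approximation in sync.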
| File |
|---|
| cpu.go |
| gpu.go |
| memory.go |