Mirror of https://github.com/mudler/LocalAI, synced 2026-04-21 21:37:21 +00:00
fix: drop gguf VRAM estimation

Cleanup: this is now handled directly in llama.cpp, so there is no need to estimate it from Go. VRAM estimation in general is tricky, but llama.cpp (…)
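For context, a Go-side estimate of this kind typically multiplies the parameter count by the bytes per weight implied by the quantization and adds a margin for the KV cache and runtime overhead. The sketch below is only illustrative of that sort of heuristic and is not LocalAI's removed code; the function name and the example figures (bytes per parameter, KV-cache size) are assumptions, which is part of why delegating the estimate to llama.cpp is preferable.

```go
package main

import "fmt"

// estimateVRAMBytes is a hypothetical, rough heuristic: parameter count times
// bytes per weight, plus a flat allowance for the KV cache. Real usage also
// depends on context length, offloaded layers, and backend overhead, none of
// which this accounts for.
func estimateVRAMBytes(paramCount uint64, bytesPerParam float64, kvCacheBytes uint64) uint64 {
	return uint64(float64(paramCount)*bytesPerParam) + kvCacheBytes
}

func main() {
	// Assumed example: a 7B-parameter model at roughly 0.56 bytes/weight
	// (a ~4.5-bit quantization) with a 1 GiB KV-cache allowance.
	est := estimateVRAMBytes(7_000_000_000, 0.56, 1<<30)
	fmt.Printf("estimated VRAM: %.2f GiB\n", float64(est)/(1<<30))
}
```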
Files in this directory:

- application_config.go
- application_config_test.go
- config_suite_test.go
- gallery.go
- gguf.go
- guesser.go
- model_config.go
- model_config_filter.go
- model_config_loader.go
- model_config_test.go
- model_test.go
- runtime_settings.go