LocalAI/pkg/xsysinfo
Ettore Di Giacinto 800f749c7b
fix: drop gguf VRAM estimation (now redundant) (#8325)

Cleanup. This is now handled directly in llama.cpp, no need to estimate from Go.

VRAM estimation is tricky in general, but llama.cpp ( 41ea26144e/src/llama.cpp (L168) ) has recently added automatic "fitting" of models to VRAM. Since we already enable that in the backend, we can drop the backend-specific GGUF VRAM estimation from our code instead of trying to guess:

 397f7f0862/backend/cpp/llama-cpp/grpc-server.cpp (L393)
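To illustrate what "trying to guess" from Go looks like, here is a purely hypothetical sketch of the kind of heuristic this commit removes (the function name and the overhead factor are illustrative assumptions, not LocalAI's actual code); the point is that a fixed overhead multiplier for KV cache and scratch buffers is exactly the guesswork llama.cpp's automatic fitting makes redundant:

```go
package main

import "fmt"

// modelFitsVRAM is a hypothetical sketch of a Go-side VRAM heuristic:
// compare the model's on-disk size, padded by a guessed overhead factor
// (KV cache, scratch buffers), against the reported free VRAM.
// LocalAI now delegates this decision to llama.cpp itself.
func modelFitsVRAM(modelBytes, freeVRAMBytes uint64, overhead float64) bool {
	estimated := uint64(float64(modelBytes) * overhead)
	return estimated <= freeVRAMBytes
}

func main() {
	// A 7 GiB model with 8 GiB free VRAM and a 1.2x overhead guess:
	// 8.4 GiB estimated > 8 GiB free, so the heuristic says it won't fit,
	// even though the backend might still fit it by offloading fewer layers.
	fmt.Println(modelFitsVRAM(7<<30, 8<<30, 1.2))
}
```

The brittleness is visible in the example: the answer flips entirely on the guessed overhead factor, which is why delegating the decision to the backend is the cleaner design.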

Fixes: https://github.com/mudler/LocalAI/issues/8302
See: https://github.com/mudler/LocalAI/issues/8302#issuecomment-3830773472
2026-02-01 17:33:28 +01:00
cpu.go feat(default): use number of physical cores as default (#2483) 2024-06-04 15:23:29 +02:00
gpu.go chore: drop noisy logs (#8142) 2026-01-21 09:52:20 +01:00
memory.go chore: drop noisy logs (#8142) 2026-01-21 09:52:20 +01:00