| Name | Last commit | Date |
| --- | --- | --- |
| gen_inference_defaults | feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092) | 2026-03-22 00:57:15 +01:00 |
| meta | feat(ui): Interactive model config editor with autocomplete (#9149) | 2026-04-07 14:42:23 +02:00 |
| application_config.go | feat(ux): backend management enhancement (#9325) | 2026-04-12 00:35:22 +02:00 |
| application_config_test.go | feat: backend versioning, upgrade detection and auto-upgrade (#9315) | 2026-04-11 22:31:15 +02:00 |
| backend_hooks.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| config_suite_test.go | dependencies(grpcio): bump to fix CI issues (#2362) | 2024-05-21 14:33:47 +02:00 |
| distributed_config.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| gallery.go | refactor: gallery inconsistencies (#2647) | 2024-06-24 17:32:12 +02:00 |
| gguf.go | feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092) | 2026-03-22 00:57:15 +01:00 |
| hooks_llamacpp.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| hooks_test.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| hooks_vllm.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| inference_defaults.go | feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092) | 2026-03-22 00:57:15 +01:00 |
| inference_defaults.json | chore: bump inference defaults from unsloth (#9396) | 2026-04-17 09:05:55 +02:00 |
| inference_defaults_test.go | feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092) | 2026-03-22 00:57:15 +01:00 |
| model_config.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| model_config_filter.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| model_config_loader.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| model_config_test.go | fix(realtime): Use user provided voice and allow pipeline models to have no backend (#8415) | 2026-02-11 14:18:05 +01:00 |
| model_test.go | fix(realtime): Use user provided voice and allow pipeline models to have no backend (#8415) | 2026-02-11 14:18:05 +01:00 |
| parser_defaults.json | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| runtime_settings.go | feat(ux): backend management enhancement (#9325) | 2026-04-12 00:35:22 +02:00 |