LocalAI/pkg/reasoning
Ettore Di Giacinto 53deeb1107 fix(reasoning): suppress partial tag tokens during autoparser warm-up
The C++ PEG parser needs a few tokens to identify the reasoning format
(e.g. "<|channel>thought\n" for Gemma 4). During this warm-up, the gRPC
layer was sending raw partial tag tokens to Go, which leaked into the
reasoning field.
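The warm-up suppression described above can be sketched as a small Go predicate. This is a conceptual illustration only; the function name and parameters are hypothetical, not the actual LocalAI gRPC-layer API (where the real fix clears reply.message on the C++ side):

```go
package main

import "fmt"

// suppressDuringWarmup is a hypothetical sketch of the warm-up rule:
// while the autoparser is active but has produced no classified diffs yet,
// raw tokens are withheld instead of being forwarded as message content,
// so partial tag fragments like "<|chan" never reach the reasoning field.
func suppressDuringWarmup(autoparserActive, hasDiffs bool, rawToken string) string {
	if autoparserActive && !hasDiffs {
		return "" // warm-up: swallow the raw partial tag token
	}
	return rawToken // autoparser has classified output (or is inactive): pass through
}

func main() {
	fmt.Printf("%q\n", suppressDuringWarmup(true, false, "<|chan")) // "" (suppressed)
	fmt.Printf("%q\n", suppressDuringWarmup(true, true, "hello"))   // "hello"
}
```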

- Clear reply.message in gRPC when the autoparser is active but has produced no
  diffs yet, matching the llama.cpp server's behavior of emitting only classified output
- Prefer C++ autoparser chat deltas for reasoning/content in all streaming
  paths, falling back to Go-side extraction for backends without autoparser
  (e.g. vLLM)
- Override non-streaming no-tools result with chat delta content when available
- Guard PrependThinkingTokenIfNeeded against partial tag prefixes during
  streaming accumulation
- Reorder default thinking tokens so <|channel>thought is checked before
  <|think|> (Gemma 4 templates contain both)
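The last two bullets can be sketched together in Go: check the known thinking tokens in the reordered sequence, and treat an accumulated stream that is still a strict prefix of one of them as ambiguous, deferring any prepend. Names here are illustrative assumptions, not the actual `PrependThinkingTokenIfNeeded` implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// Default thinking tokens in the reordered sequence described above:
// "<|channel>thought" is checked before "<|think|>".
var thinkingTokens = []string{"<|channel>thought", "<|think|>"}

// isPartialTagPrefix (hypothetical helper) reports whether the accumulated
// stream could still grow into one of the known thinking tokens. If it can,
// prepending a default thinking token must be deferred until more tokens
// arrive and the ambiguity is resolved.
func isPartialTagPrefix(acc string) bool {
	for _, tok := range thinkingTokens {
		if acc != "" && len(acc) < len(tok) && strings.HasPrefix(tok, acc) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isPartialTagPrefix("<|chan")) // true: may become "<|channel>thought"
	fmt.Println(isPartialTagPrefix("hello"))  // false: cannot become a thinking tag
}
```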
2026-04-04 20:45:57 +00:00
config.go feat(openresponses): Support reasoning blocks (#8133) 2026-01-21 00:11:45 +01:00
extractor.go fix(reasoning): accumulate and strip reasoning tags from autoparser results (#9227) 2026-04-04 18:15:32 +02:00
extractor_test.go fix(reasoning): accumulate and strip reasoning tags from autoparser results (#9227) 2026-04-04 18:15:32 +02:00
reasoning.go fix(reasoning): suppress partial tag tokens during autoparser warm-up 2026-04-04 20:45:57 +00:00
reasoning_suite_test.go fix(reasoning): support models with reasoning without starting thinking tag (#8132) 2026-01-20 21:07:59 +01:00
reasoning_test.go feat(gemma4): add thinking support (#9221) 2026-04-04 12:11:38 +02:00