Model lists for services like PPIO (71 models) and SiliconFlow (111 models)
were truncated to 50 entries. Return the full list instead.
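A minimal sketch of the fix, with illustrative names (the real fetch logic and page shape live elsewhere): flatten every page of results rather than stopping at the first 50 entries.

```typescript
// Hypothetical shape of a paged /models response.
interface ModelPage {
  data: { id: string }[];
}

// Collect ids from every page instead of truncating to the first 50.
function listAllModelIds(pages: ModelPage[]): string[] {
  return pages.flatMap((page) => page.data.map((m) => m.id));
}
```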
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When discoveredModels is empty but the probe succeeds, return the full
knownModels list (7 for MiniMax) instead of just the probed model.
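The fallback rule can be sketched as a small pure function; the name `resolveModels` and the failed-probe behavior (empty result) are assumptions, not the exact implementation.

```typescript
function resolveModels(
  discovered: string[],
  knownModels: string[],
  probeOk: boolean,
): string[] {
  // Discovered models always win when present.
  if (discovered.length > 0) return discovered;
  // Probe succeeded but discovery came back empty: fall back to the
  // full hardcoded list, not just the single probed model.
  return probeOk ? knownModels : [];
}
```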
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
buildModelCandidates prioritized the global configModel (kimi-k2.5)
over service-specific models, causing the MiniMax probe to try kimi-k2.5
against MiniMax's endpoint. It now uses knownModels[0] as the
preferredModel for services that define known models.
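The priority change amounts to this one-liner, sketched with a hypothetical helper name:

```typescript
// Service-specific knownModels win over the global config model, so a
// probe never tries e.g. kimi-k2.5 against another service's endpoint.
function pickPreferredModel(knownModels: string[], configModel: string): string {
  return knownModels.length > 0 ? knownModels[0] : configModel;
}
```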
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
f942c84 changed GET /models to call probeServiceCapabilities, which
performs chat-completion testing; this made zhipu take 97s and minimax 47s.
GET /models only needs the model list, not a full API health check.
Reverted to a simple /models fetch with the pi-ai fallback.
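The cheap path can be sketched as a pure decision function (the real handler presumably does the HTTP call; names and the body shape are illustrative): take the /models response if it is usable, otherwise fall back, and never run a chat-completion probe here.

```typescript
interface ModelsBody {
  data?: { id: string }[];
}

// Decide the model list from a /models response; any non-2xx status or
// malformed body falls back to pi-ai's built-in list instead of
// escalating to an expensive capability probe.
function modelsFromResponse(
  status: number,
  body: ModelsBody | null,
  fallback: string[],
): string[] {
  if (status < 200 || status >= 300 || !body?.data) return fallback;
  return body.data.map((m) => m.id);
}
```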
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
MiniMax GET /models took 47s because it tried the Anthropic endpoint
(which has no /models route) and waited for the timeout. Services with
knownModels now
return the hardcoded list immediately, skipping the network call.
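The short-circuit looks roughly like this; sketched synchronously with a thunk standing in for the (presumably async) network fetch:

```typescript
// Services that define knownModels skip the /models network call
// entirely, so endpoints without that route can no longer hang until
// the request times out.
function listModelsForService(
  svc: { knownModels?: string[] },
  fetchRemote: () => string[],
): string[] {
  if (svc.knownModels && svc.knownModels.length > 0) return svc.knownModels;
  return fetchRemote();
}
```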
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously the auditor and reviser always operated on the latest chapter.
Now the agent can pass an explicit chapterNumber, so a request like
"重写第5章" ("rewrite chapter 5") targets chapter 5 instead of always
hitting the latest.
- Added chapterNumber to SubAgentParams schema
- auditor: pipeline.auditDraft(bookId, chapterNumber)
- reviser: pipeline.reviseDraft(bookId, chapterNumber, mode)
- Updated system prompt to document the parameter
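The parameter threading above can be sketched as follows; the interface is a simplified stand-in for the real SubAgentParams schema, and `resolveTargetChapter` is a hypothetical helper name:

```typescript
interface SubAgentParams {
  task: "auditor" | "reviser";
  bookId: string;
  chapterNumber?: number; // newly added optional field
  mode?: string;
}

// An explicit chapterNumber wins; otherwise keep the old behavior of
// operating on the latest chapter.
function resolveTargetChapter(params: SubAgentParams, latestChapter: number): number {
  return params.chapterNumber ?? latestChapter;
}
```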
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When the agent creates a book via sub_agent architect, the sidebar
didn't refresh because book:created was only broadcast from the
POST /books/create endpoint. Now the agent endpoint also broadcasts
this event when architect completes successfully.
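A sketch of the added broadcast, with hypothetical names (`onSubAgentDone`, the callback shape) standing in for the real agent-endpoint code:

```typescript
type Broadcast = (event: string, payload: unknown) => void;

// Mirror the POST /books/create behavior: when the architect sub-agent
// finishes successfully, emit book:created so the sidebar refreshes.
function onSubAgentDone(
  task: string,
  ok: boolean,
  bookId: string,
  broadcast: Broadcast,
): boolean {
  if (task === "architect" && ok) {
    broadcast("book:created", { bookId });
    return true;
  }
  return false;
}
```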
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add output format rules: no emoji, use bullet lists/tables for structured content
- Add chapter index management instructions so the agent can detect and fix
index.json inconsistencies (missing chapters, rewrites) using its own tools
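The kind of index.json inconsistency the agent is asked to detect can be illustrated with a small check; this is a hypothetical sketch of "missing chapter" detection, not code from the commit:

```typescript
// Given the chapter numbers present in index.json, report every number
// between 1 and the highest chapter that is absent (a gap the agent
// should repair).
function findMissingChapters(chapterNumbers: number[]): number[] {
  const have = new Set(chapterNumbers);
  const max = chapterNumbers.length > 0 ? Math.max(...chapterNumbers) : 0;
  const missing: number[] = [];
  for (let n = 1; n <= max; n++) {
    if (!have.has(n)) missing.push(n);
  }
  return missing;
}
```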
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Test connection: remove slow chat completion, just validate via /models + fallback
- GET models: add pi-ai fallback when /models returns 404
- Agent fallback/probe: use empty baseUrl instead of config.llm.baseUrl to prevent
default service URL leaking into other services (e.g. moonshot URL used for minimax)
- Custom service URLs auto-normalize: append /v1 if missing (save, test, resolve)
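The /v1 auto-normalization can be sketched like this (function name is illustrative; the trailing-slash handling is an assumption about the intended behavior):

```typescript
// Append /v1 when a user-entered custom service URL omits it; strip
// trailing slashes first so we never emit "//v1" or double "/v1/v1".
function normalizeBaseUrl(url: string): string {
  const trimmed = url.replace(/\/+$/, "");
  return /\/v1$/.test(trimmed) ? trimmed : `${trimmed}/v1`;
}
```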
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
pi-ai's built-in model objects may carry a different baseUrl (e.g. an
international endpoint) or api format (e.g. anthropic-messages) than our
configured presets.
Always construct our own model object using preset values, only inheriting
metadata (reasoning, cost, contextWindow) from pi-ai.
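A sketch of the merge rule, with simplified types and illustrative model/api strings: routing fields always come from our preset, so pi-ai's baseUrl or api format can never override them.

```typescript
interface PiAiMeta {
  reasoning?: boolean;
  cost?: number;
  contextWindow?: number;
  baseUrl?: string; // present on pi-ai objects but deliberately ignored
  api?: string;     // likewise ignored
}

interface Preset {
  id: string;
  baseUrl: string;
  api: string;
}

// Build our own model object from preset values, inheriting only the
// descriptive metadata (reasoning, cost, contextWindow) from pi-ai.
function buildModel(preset: Preset, meta: PiAiMeta = {}) {
  const { reasoning, cost, contextWindow } = meta;
  return { id: preset.id, baseUrl: preset.baseUrl, api: preset.api, reasoning, cost, contextWindow };
}
```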
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Change baseUrl from api.minimax.chat to api.minimaxi.com (the current OpenAI-compatible endpoint)
- Add knownModels for MiniMax (7 models) since it doesn't support GET /models
- listModelsForService prioritizes knownModels over dynamic /models call
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>