waveterm/pkg/aiusechat
Commit edc20f7ec0 by Copilot: Bring Anthropic usechat backend to OpenAI-level tool-use parity and stream robustness (#2971)
This updates `pkg/aiusechat/anthropic` from a partial implementation to
full backend parity for core tool-use orchestration and stream behavior.
The main gaps were unimplemented tool lifecycle methods, missing
persisted tool-use UI state, and weaker disconnect/error handling than
the OpenAI backend.

- **Tool-use lifecycle parity (critical path)**
  - Implemented Anthropic backend support for:
    - `UpdateToolUseData`
    - `RemoveToolUseCall`
    - `GetFunctionCallInputByToolCallId`
  - Wired `pkg/aiusechat/usechat-backend.go` to call the Anthropic
    implementations instead of stubs.
  - Added an Anthropic run-step nil-message guard so `nil` responses are not
    wrapped into `[]GenAIMessage{nil}`.
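
The lifecycle methods above operate on stored Anthropic native messages. A minimal sketch of what lookup by tool call id could look like (the types and field names here are illustrative, not the actual waveterm definitions):

```go
package main

import "fmt"

// ContentBlock is a simplified stand-in for an Anthropic content block.
type ContentBlock struct {
	Type      string // "text" or "tool_use"
	ToolUseID string // id of the tool_use block
	Input     string // raw JSON arguments for tool_use blocks
}

// Message is a simplified stand-in for an Anthropic native message.
type Message struct {
	Role   string
	Blocks []ContentBlock
}

// getFunctionCallInputByToolCallId scans assistant messages for a
// tool_use block with a matching id and returns its raw input JSON.
func getFunctionCallInputByToolCallId(msgs []Message, toolCallID string) (string, bool) {
	for _, m := range msgs {
		if m.Role != "assistant" {
			continue
		}
		for _, b := range m.Blocks {
			if b.Type == "tool_use" && b.ToolUseID == toolCallID {
				return b.Input, true
			}
		}
	}
	return "", false
}

func main() {
	msgs := []Message{
		{Role: "assistant", Blocks: []ContentBlock{
			{Type: "tool_use", ToolUseID: "toolu_01", Input: `{"path":"/tmp"}`},
		}},
	}
	input, ok := getFunctionCallInputByToolCallId(msgs, "toolu_01")
	fmt.Println(ok, input)
}
```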

- **Persisted tool-use state in Anthropic native messages**
  - Added internal `ToolUseData` storage on Anthropic `tool_use` blocks.
  - Ensured internal-only fields are stripped before API requests via
    `Clean()`.
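
A sketch of how internal-only state can ride on a `tool_use` block and be stripped by `Clean()` before the request is serialized (the struct shape and JSON keys are assumptions, not the real types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolUseData is a stand-in for the persisted tool-use UI state.
type ToolUseData struct {
	Status string `json:"status"`
}

// ToolUseBlock sketches a tool_use content block that carries
// internal-only UI state which must not be sent to the API.
type ToolUseBlock struct {
	Type        string          `json:"type"`
	ID          string          `json:"id"`
	Name        string          `json:"name"`
	Input       json.RawMessage `json:"input"`
	ToolUseData *ToolUseData    `json:"toolusedata,omitempty"` // internal only
}

// Clean returns a copy with internal-only fields removed, suitable
// for inclusion in an outbound API request.
func (b ToolUseBlock) Clean() ToolUseBlock {
	b.ToolUseData = nil
	return b
}

func main() {
	blk := ToolUseBlock{
		Type: "tool_use", ID: "toolu_01", Name: "read_file",
		Input:       json.RawMessage(`{}`),
		ToolUseData: &ToolUseData{Status: "completed"},
	}
	out, _ := json.Marshal(blk.Clean())
	fmt.Println(string(out)) // internal field is omitted from the wire format
}
```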

- **UI conversion parity for reloaded history**
  - Extended `ConvertToUIMessage()` to emit `data-tooluse` parts when
    tool-use metadata exists, in addition to `tool-{name}` parts.
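
The conversion rule can be sketched as follows: always emit the `tool-{name}` part, and add a `data-tooluse` part only when metadata is present (the `UIMessagePart` shape is hypothetical):

```go
package main

import "fmt"

// UIMessagePart is a simplified stand-in for the UI message part type.
type UIMessagePart struct {
	Type string
	Data any
}

// toolUseToUIParts sketches the ConvertToUIMessage() extension: a
// tool_use block always yields a `tool-{name}` part, plus a
// `data-tooluse` part when persisted metadata exists.
func toolUseToUIParts(toolName string, toolUseData map[string]any) []UIMessagePart {
	parts := []UIMessagePart{{Type: "tool-" + toolName}}
	if toolUseData != nil {
		parts = append(parts, UIMessagePart{Type: "data-tooluse", Data: toolUseData})
	}
	return parts
}

func main() {
	for _, p := range toolUseToUIParts("read_file", map[string]any{"status": "completed"}) {
		fmt.Println(p.Type)
	}
}
```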

- **Streaming UX parity for tool argument deltas**
  - Added `aiutil.SendToolProgress(...)` calls during:
    - `input_json_delta` (incremental updates)
    - `content_block_stop` for `tool_use` (final update)
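
A sketch of the delta handling, with a stand-in for `aiutil.SendToolProgress(...)` (the real signature may differ): each `input_json_delta` appends to an argument buffer and emits an incremental update, and `content_block_stop` emits one final update:

```go
package main

import (
	"fmt"
	"strings"
)

// sendToolProgress stands in for aiutil.SendToolProgress: it forwards
// the partially-accumulated tool arguments to the UI.
func sendToolProgress(toolCallID, partialJSON string) {
	fmt.Printf("progress %s: %s\n", toolCallID, partialJSON)
}

// handleToolArgStream sketches the stream handling: deltas accumulate
// into a buffer with an incremental progress event each, and the
// block-stop event emits one final progress update.
func handleToolArgStream(toolCallID string, deltas []string) string {
	var buf strings.Builder
	for _, d := range deltas {
		buf.WriteString(d)                      // input_json_delta
		sendToolProgress(toolCallID, buf.String()) // incremental update
	}
	sendToolProgress(toolCallID, buf.String()) // content_block_stop: final update
	return buf.String()
}

func main() {
	final := handleToolArgStream("toolu_01", []string{`{"path":`, `"/tmp"}`})
	fmt.Println("final:", final)
}
```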

- **Disconnect/stream robustness**
  - Added `sse.Err()` checks in event handling and decode-error path.
  - Added partial-text extraction on client disconnect, with deterministic
    ordering of partial blocks.
  - Cleaned up completed blocks from in-flight state to avoid duplicate
    partial extraction.
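
The disconnect path can be sketched as an in-flight map keyed by content-block index: completed blocks are deleted on block stop, and extraction on disconnect sorts the remaining indices for deterministic ordering (a simplified model, not the actual stream state):

```go
package main

import (
	"fmt"
	"sort"
)

// streamState models in-flight text accumulation per content-block
// index. Completed blocks are removed so a later disconnect cannot
// extract them a second time.
type streamState struct {
	inFlight map[int]string // block index -> accumulated partial text
}

func (s *streamState) appendDelta(idx int, text string) { s.inFlight[idx] += text }
func (s *streamState) blockDone(idx int)                { delete(s.inFlight, idx) }

// extractPartialText is called on client disconnect: it returns the
// remaining partial blocks in deterministic (index) order.
func (s *streamState) extractPartialText() []string {
	idxs := make([]int, 0, len(s.inFlight))
	for i := range s.inFlight {
		idxs = append(idxs, i)
	}
	sort.Ints(idxs)
	out := make([]string, 0, len(idxs))
	for _, i := range idxs {
		out = append(out, s.inFlight[i])
	}
	return out
}

func main() {
	st := &streamState{inFlight: map[int]string{}}
	st.appendDelta(0, "hello ")
	st.appendDelta(0, "world")
	st.appendDelta(2, "partial tail")
	st.blockDone(0) // block 0 completed: must not be re-extracted
	fmt.Println(st.extractPartialText())
}
```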

- **Correctness + hygiene alignment**
  - Continuation model checks now use `AreModelsCompatible(...)` instead of
    strict string equality.
  - Added hostname sanitization in Anthropic error paths (HTTP error
    parsing and `httpClient.Do` failures).
  - Replaced unconditional Anthropic debug `log.Printf` calls with
    `logutil.DevPrintf`.
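
As an illustration of why compatibility beats strict equality: dated snapshots and `-latest` aliases of the same base model should be allowed to continue each other. The normalization below is a hypothetical stand-in for `AreModelsCompatible(...)`, not its real logic:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// datedSuffix matches trailing date stamps like "-20250514".
var datedSuffix = regexp.MustCompile(`-\d{8}$`)

// areModelsCompatible is an illustrative stand-in for the real check:
// two model ids are treated as compatible when they share the same
// base id after stripping a trailing date stamp or "-latest" alias.
func areModelsCompatible(a, b string) bool {
	norm := func(m string) string {
		m = strings.TrimSuffix(m, "-latest")
		return datedSuffix.ReplaceAllString(m, "")
	}
	return norm(a) == norm(b)
}

func main() {
	fmt.Println(areModelsCompatible("claude-sonnet-4-20250514", "claude-sonnet-4-latest"))
	fmt.Println(areModelsCompatible("claude-sonnet-4-20250514", "claude-opus-4-20250514"))
}
```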

- **Targeted coverage additions**
  - Added Anthropic tests for:
    - function-call lookup by tool call id
    - tool-use data update + removal
    - `data-tooluse` UI conversion behavior
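
The update/removal behavior those tests exercise can be sketched against a minimal store (names and shapes are illustrative):

```go
package main

import "fmt"

// toolUseStore sketches per-tool-call UI state keyed by tool call id,
// the behavior the new update/removal tests cover.
type toolUseStore struct {
	data map[string]string // tool call id -> serialized ToolUseData
}

// UpdateToolUseData inserts or overwrites the state for a tool call.
func (s *toolUseStore) UpdateToolUseData(id, data string) { s.data[id] = data }

// RemoveToolUseCall drops the state for a tool call entirely.
func (s *toolUseStore) RemoveToolUseCall(id string) { delete(s.data, id) }

func main() {
	s := &toolUseStore{data: map[string]string{}}
	s.UpdateToolUseData("toolu_01", `{"status":"running"}`)
	s.UpdateToolUseData("toolu_01", `{"status":"completed"}`) // update overwrites
	fmt.Println(s.data["toolu_01"])
	s.RemoveToolUseCall("toolu_01")
	_, ok := s.data["toolu_01"]
	fmt.Println(ok)
}
```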

```go
// usechat-backend.go
// Guard: a nil step message must not be wrapped into []GenAIMessage{nil}.
func (b *anthropicBackend) RunChatStep(...) (..., []uctypes.GenAIMessage, ...) {
    stopReason, msg, rateLimitInfo, err := anthropic.RunAnthropicChatStep(ctx, sseHandler, chatOpts, cont)
    if msg == nil {
        return stopReason, nil, rateLimitInfo, err
    }
    return stopReason, []uctypes.GenAIMessage{msg}, rateLimitInfo, err
}
```


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: sawka <2722291+sawka@users.noreply.github.com>
Co-authored-by: sawka <mike@commandline.dev>
2026-03-04 16:20:50 -08:00
| Path | Last commit | Date |
| --- | --- | --- |
| `aiutil` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |
| `anthropic` | Bring Anthropic usechat backend to OpenAI-level tool-use parity and stream robustness (#2971) | 2026-03-04 16:20:50 -08:00 |
| `chatstore` | Implement AI "stop" -- in the client, open ai responses/chat, and gemini backends (#2704) | 2025-12-19 16:59:31 -08:00 |
| `gemini` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |
| `google` | Add Google AI file summarization package (#2455) | 2025-10-17 17:24:06 -07:00 |
| `openai` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |
| `openaichat` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |
| `uctypes` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |
| `toolapproval.go` | fix tool approval lifecycle to match SSE connection, not keep-alives (#2693) | 2025-12-18 16:43:59 -08:00 |
| `tools.go` | Add Google Gemini backend for AI chat (#2602) | 2025-12-05 12:43:42 -08:00 |
| `tools_builder.go` | more builder updates (#2553) | 2025-11-13 22:47:46 -08:00 |
| `tools_readdir.go` | implement openai chat completions api -- enables local model support (#2600) | 2025-11-26 11:43:19 -08:00 |
| `tools_readdir_test.go` | better tool input descriptions (#2507) | 2025-11-02 12:33:59 -08:00 |
| `tools_readfile.go` | implement openai chat completions api -- enables local model support (#2600) | 2025-11-26 11:43:19 -08:00 |
| `tools_screenshot.go` | implement openai chat completions api -- enables local model support (#2600) | 2025-11-26 11:43:19 -08:00 |
| `tools_term.go` | better tool input descriptions (#2507) | 2025-11-02 12:33:59 -08:00 |
| `tools_tsunami.go` | fix tsunami scaffold in build (#2564) | 2025-11-14 16:35:37 -08:00 |
| `tools_web.go` | better tool input descriptions (#2507) | 2025-11-02 12:33:59 -08:00 |
| `tools_writefile.go` | implement openai chat completions api -- enables local model support (#2600) | 2025-11-26 11:43:19 -08:00 |
| `usechat-backend.go` | Bring Anthropic usechat backend to OpenAI-level tool-use parity and stream robustness (#2971) | 2026-03-04 16:20:50 -08:00 |
| `usechat-mode.go` | Add groq AI mode provider defaults and docs (#2942) | 2026-02-26 09:50:05 -08:00 |
| `usechat-prompts.go` | minor v0.13 fixes (#2649) | 2025-12-08 21:58:54 -08:00 |
| `usechat-utils.go` | Create Interface for Backend AI Providers (#2572) | 2025-11-19 11:38:56 -08:00 |
| `usechat.go` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |
| `usechat_mode_test.go` | Centralize proxy HTTP client creation in aiutil and remove redundant backend tests (#2961) | 2026-03-02 11:16:50 -08:00 |