mirror of
https://github.com/h3pdesign/Neon-Vision-Editor
synced 2026-04-21 21:37:17 +00:00
- Wire up Grok (xAI) and Gemini (Google) API calls inside generateModelCompletion(...)
  - Grok: POST https://api.x.ai/v1/chat/completions with OpenAI-style messages
  - Gemini: POST v1beta/models/&lt;model&gt;:generateContent with a contents/parts payload
  - Reuse the existing prompt and sanitizeCompletion(_:) to keep output concise
  - Respect missing/empty tokens by short-circuiting to a no-op
  - Ensure these providers are only used by inline completion, already gated by isAutoCompletionEnabled
- AIClientFactory: make the switch exhaustive by adding an `.anthropic` case (returns nil for now)
  - Anthropic is handled directly in ContentView's inline completion path

Notes:
- No changes to other features or UI flows
- OpenAI and Anthropic implementations remain as previously added
- Gemini model used: gemini-1.5-flash-latest
- Grok model used: grok-2-latest

Testing:
- Toggle code completion with the toolbar button; when disabled, no API calls occur
- With tokens set, select Grok/Gemini and type to trigger suggestions
- Verify suggestions insert without duplication and are trimmed via sanitizeCompletion(_:)
- Build to confirm the AIClientFactory switch error is resolved
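A minimal sketch of the two request shapes and the empty-token short-circuit described above. The function names (`grokRequestBody`, `geminiRequestBody`, `shouldCallProvider`) are hypothetical helpers for illustration; only the endpoints, model names, and payload layouts come from the commit message:

```swift
import Foundation

// OpenAI-style chat payload for POST https://api.x.ai/v1/chat/completions.
// "grok-2-latest" is the model named in the commit message.
func grokRequestBody(prompt: String) -> [String: Any] {
    return [
        "model": "grok-2-latest",
        "messages": [["role": "user", "content": prompt]],
        "stream": false
    ]
}

// contents/parts payload for
// POST .../v1beta/models/gemini-1.5-flash-latest:generateContent.
func geminiRequestBody(prompt: String) -> [String: Any] {
    return [
        "contents": [["parts": [["text": prompt]]]]
    ]
}

// Short-circuit to a no-op when the API token is missing or blank,
// so no network request is ever issued without credentials.
func shouldCallProvider(token: String?) -> Bool {
    guard let token = token,
          !token.trimmingCharacters(in: .whitespaces).isEmpty else {
        return false
    }
    return true
}
```

In this sketch the caller would check `shouldCallProvider` first and return early, then serialize the relevant body with `JSONSerialization` before issuing the POST; the actual wiring inside generateModelCompletion(...) may differ.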