This PR contains the fixes needed to get a working demo workspace skill.
- Updated the skill iteratively until it works reliably
- Fixed infinite loop in AI chat by memoizing ai-sdk output
- Finished navigateToView implementation
- Increased MAX_STEPS to 300 so the chat doesn't quit in the middle of a
long-running skill
- Added CreateManyRelationFields
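The PR doesn't show the actual hook code, but the shape of the infinite-loop fix can be sketched as follows. The names (`stabilize`) and the JSON-based equality check are assumptions for illustration: if a hook returns a freshly allocated object on every render, effects that depend on it re-fire forever; returning a cached reference when the content is unchanged breaks the loop.

```typescript
// Hypothetical sketch: memoize a value by content so that identical data
// yields an identical reference across calls, preventing dependent effects
// from re-running in a loop.
const stabilize = <T>() => {
  let cached: T | undefined;
  let cachedJson: string | undefined;
  return (next: T): T => {
    const json = JSON.stringify(next);
    if (cachedJson === json && cached !== undefined) {
      // Same content -> same reference, so React sees no change.
      return cached;
    }
    cached = next;
    cachedJson = json;
    return next;
  };
};
```

In a React hook this is what `useMemo` (keyed on stable inputs) achieves; the sketch just makes the reference-stability requirement explicit.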
## Summary
Removes the `recoil` dependency entirely from `package.json` and
`twenty-front/package.json`, completing the migration to Jotai as the
sole state management library.
Removes all Recoil infrastructure: `RecoilRoot` wrapper from `App.tsx`
and test decorators, `RecoilDebugObserver`, Recoil-specific ESLint rules
(`use-getLoadable-and-getValue-to-get-atoms`,
`useRecoilCallback-has-dependency-array`), and legacy Recoil utility
hooks/types (`useRecoilComponentState`, `useRecoilComponentValue`,
`createComponentState`, `createFamilyState`, `getSnapshotValue`,
`cookieStorageEffect`, `localStorageEffect`, etc.).
Renames all `V2`-suffixed Jotai state files and types to their canonical
names (e.g., `ComponentStateV2` -> `ComponentState`,
`agentChatInputStateV2` -> `agentChatInputState`, `SelectorCallbacksV2`
-> `SelectorCallbacks`), and removes the now-redundant V1 counterparts.
Updates ~433 files across the codebase to use the renamed Jotai imports,
remove Recoil imports, and clean up test wrappers (`RecoilRootDecorator`
-> `JotaiRootDecorator`).
## Summary
Replaces the static "Ask AI" header in the command menu with the
conversation’s auto-generated title once it’s set after the first
message.
## Changes
- **Backend:** Title is generated after the first user message (existing
behavior).
- **Frontend:** After the first stream completes, we fetch the thread
title and sync it to:
- `currentAIChatThreadTitleState` (persists across command menu
close/reopen)
- Command menu page info and navigation stack (so the title survives
back navigation)
- **Entry points:** Opening Ask AI from the left nav or command center
uses the same title resolution (explicit `pageTitle` → current thread
title → "Ask AI" fallback).
- **Race fix:** Title sync only runs when the thread that finished
streaming is still the active thread, so switching threads mid-stream
doesn’t overwrite the current thread’s title.
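The race guard described above can be reduced to a small predicate. The function and type names here are illustrative, not the actual implementation: the fetched title is applied only if the thread that finished streaming is still the active one.

```typescript
// Hypothetical sketch of the race guard: return the title to apply, or null
// when the user has switched threads mid-stream and the update must be dropped.
type TitleSync = {
  finishedThreadId: string;
  activeThreadId: string;
  title: string;
};

const resolveTitleUpdate = ({
  finishedThreadId,
  activeThreadId,
  title,
}: TitleSync): string | null =>
  finishedThreadId === activeThreadId ? title : null;
```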
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
⚠️ **AI-generated PR — not ready for review** ⚠️
cc @FelixMalfait
---
## Changes
### System prompt improvements
- Explicit skill-before-tools workflow to prevent the model from calling
tools without loading the matching skill first
- Data efficiency guidance (default small limits, use filters)
- Pluralized `load_skill` → `load_skills` for consistency with
`load_tools`
### Token usage reduction
- Output serialization layer: strips null/undefined/empty values from
tool results
- Lowered default `find_*` limit from 100 → 10, max from 1000 → 100
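A minimal sketch of what such an output serialization layer might look like, assuming the goal is simply to recursively drop values that carry no information before tool results are sent back to the model (the function names are hypothetical):

```typescript
// Hypothetical sketch: treat null, undefined, empty strings, empty arrays,
// and empty objects as token waste.
const isEmpty = (value: unknown): boolean =>
  value === null ||
  value === undefined ||
  value === '' ||
  (Array.isArray(value) && value.length === 0) ||
  (typeof value === 'object' &&
    value !== null &&
    !Array.isArray(value) &&
    Object.keys(value).length === 0);

// Recursively strip empty values from a tool result.
const stripEmpty = (value: unknown): unknown => {
  if (Array.isArray(value)) {
    return value.map(stripEmpty).filter((item) => !isEmpty(item));
  }
  if (typeof value === 'object' && value !== null) {
    const entries = Object.entries(value)
      .map(([key, child]) => [key, stripEmpty(child)] as const)
      .filter(([, child]) => !isEmpty(child));
    return Object.fromEntries(entries);
  }
  return value;
};
```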
### System object tool generation
- System objects (calendar events, messages, etc.) now generate AI tools
- Only workflow-related and favorite-related objects are excluded
### Context window display fix
- **Bug**: UI compared cumulative tokens (sum of all turns) against
single-request context window → showed 100% after a few turns
- **Fix**: Track `conversationSize` (last step's `inputTokens`) which
represents the actual conversation history size sent to the model
- New `conversationSize` column on thread entity with migration
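The bug and fix can be sketched as two readings of the same per-step usage data (the `Step` shape is an assumption for illustration):

```typescript
type Step = { inputTokens: number; outputTokens: number };

// Buggy reading: summing every turn makes the ratio race past the real
// context usage after a few exchanges.
const cumulativeTokens = (steps: Step[]): number =>
  steps.reduce((sum, step) => sum + step.inputTokens + step.outputTokens, 0);

// Fixed reading: the last step's inputTokens already includes the whole
// conversation history the model actually received on that request.
const conversationSize = (steps: Step[]): number =>
  steps.length > 0 ? steps[steps.length - 1].inputTokens : 0;

const contextUsagePercent = (steps: Step[], contextWindow: number): number =>
  Math.min(100, Math.round((conversationSize(steps) / contextWindow) * 100));
```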
### Workspace AI instructions
- Support for custom workspace-level AI instructions
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
## Summary
Fixes an issue where the AI chat would show loading shimmers
indefinitely when opened on a workspace with no conversation history.
**Root cause:** The `isLoading` state in `useAgentChat` included
`!currentAIChatThread`. On workspaces with no chat threads,
`currentAIChatThread` remained `null`, causing `isLoading` to be
permanently `true`.
**Changes:**
- Remove `!currentAIChatThread` from `isLoading` calculation in
`useAgentChat` - this state should only reflect streaming/file selection
status
- Auto-create a chat thread in `useAgentChatData` when the threads query
returns empty, ensuring a valid thread exists for the `useChat` hook to
initialize properly
- Add primary font color to empty state title for better visibility
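The before/after of the `isLoading` derivation can be sketched as follows; the state shape is an assumption based on the description, not the actual hook code:

```typescript
type ChatState = {
  isStreaming: boolean;
  isSelectingFiles: boolean;
  currentAIChatThread: { id: string } | null;
};

// Before (assumed shape): with no thread on the workspace, the flag is
// permanently true, hence the infinite shimmer.
const isLoadingBefore = (s: ChatState): boolean =>
  s.isStreaming || s.isSelectingFiles || s.currentAIChatThread === null;

// After: thread existence no longer gates the loading state; it reflects
// only streaming/file selection.
const isLoadingAfter = (s: ChatState): boolean =>
  s.isStreaming || s.isSelectingFiles;
```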
## Test plan
1. Create a new workspace or use a workspace with no AI chat history
2. Open the AI chat
3. Verify the empty state shows (not infinite loading shimmer)
4. Send a message and verify it works correctly
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Cursor <cursoragent@cursor.com>
## Summary
- Add a context usage indicator to the AI chat interface inspired by
Vercel's AI SDK Context component
- Display token consumption, context window utilization percentage, and
estimated cost in credits
- Show a circular progress ring with percentage, revealing detailed
breakdown on hover
## Changes
### Backend
- Stream usage metadata (tokens, model config) via `messageMetadata`
callback in `agent-chat-streaming.service.ts`
- Return model config from `chat-execution.service.ts`
- Add usage and model types to `ExtendedUIMessage` metadata
### Frontend
- New `ContextUsageProgressRing` component - circular SVG progress
indicator
- New `AIChatContextUsageButton` component with hover card showing:
- Progress bar with used/total tokens
- Input/output token counts with credit costs
- Total credits consumed
- Track cumulative usage in Recoil state (`agentChatUsageState`)
- Reset usage when creating new chat thread
- Integrate button into `AIChatTab`
## Test plan
- [ ] Open AI chat and send a message
- [ ] Verify the context usage button appears with percentage
- [ ] Hover over the button to see detailed breakdown
- [ ] Verify credits are calculated correctly
- [ ] Create a new chat thread and verify usage resets to 0
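The geometry behind a circular SVG progress ring like `ContextUsageProgressRing` (the helper name and shape here are illustrative) is standard: the circle's stroke is dashed to its full circumference, and the dash offset hides the unused fraction, leaving a visible arc proportional to usage.

```typescript
// Hypothetical sketch of the progress-ring math for an SVG <circle>.
const ringStroke = (radius: number, percent: number) => {
  const circumference = 2 * Math.PI * radius;
  const clamped = Math.max(0, Math.min(100, percent));
  return {
    // Dash the stroke to exactly one full revolution...
    strokeDasharray: circumference,
    // ...then offset it so only `percent` of the circle is drawn.
    strokeDashoffset: circumference * (1 - clamped / 100),
  };
};
```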
## Summary
Adds an intelligent routing system that automatically selects the best
agent for user queries based on conversation context.
## Changes
- Added `routerModel` column to workspace table for configurable router
LLM selection
- Implemented `RouterService` with conversation history analysis and
agent matching logic
- Created router settings UI in AI Settings page with model dropdown
- Removed agent-specific thread associations - threads are now
agent-agnostic
- Added real-time routing status notification in chat UI with shimmer
effect
- Removed automatic default assistant agent creation
- Renamed GraphQL operations from agent-specific to generic (e.g.,
`agentChatThreads` → `chatThreads`)
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: Félix Malfait <felix@twenty.com>
Closes [#1583](https://github.com/twentyhq/core-team-issues/issues/1583)
- Removed `messageId` column from `File` table and its references in
code (unrelated to this PR)
- Updated AI chat to use React Context API with persistent provider in
`CommandMenuContainer`
- Converted all AI chat component states to regular Recoil atoms (no
instance context needed)
---------
Co-authored-by: Félix Malfait <felix@twenty.com>