## Summary
This PR improves type safety across the codebase by replacing generic
`any` types with proper TypeScript types, removes unnecessary record
store operations, and adds TODO comments for future refactoring of
useEffect hooks.
## Key Changes
### Type Safety Improvements
- **SettingsAgentTurnDetail.tsx**: Replaced `any` type annotations with
proper `AgentMessage` type from generated GraphQL types
- **useCreateManyRecords.ts**: Added `RecordGqlNode` type for better
type safety when handling mutation responses
- **useLazyFindOneRecord.ts**: Replaced generic `Record<string, any>`
with `Record<string, RecordGqlNode>` for improved type checking
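To illustrate the kind of improvement these bullets describe, here is a hedged sketch (not the actual twenty-front code): `RecordGqlNode` stands in for the real generated type, and `extractCreatedRecords` is a hypothetical helper showing how typed mutation responses replace `any`.

```typescript
// Hypothetical minimal shape of RecordGqlNode; the real generated
// GraphQL type in the codebase is richer than this.
type RecordGqlNode = { id: string; __typename: string } & Record<string, unknown>;

// Before: a `Record<string, any>` response let any field access through.
// After: typed access, so a misspelled field fails at compile time.
const extractCreatedRecords = (
  response: Record<string, { records: RecordGqlNode[] }>,
  mutationName: string,
): RecordGqlNode[] => response[mutationName]?.records ?? [];
```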
### Removed Unnecessary Operations
- **EventCardCalendarEvent.tsx**: Removed unused
`useUpsertRecordsInStore` hook and its associated useEffect that was
upserting calendar event records to the store
- **EventCardMessage.tsx**: Removed unused `useUpsertRecordsInStore`
hook and its associated useEffect that was upserting message records to
the store
### Conditional Query Execution
- **useLoadCurrentUser.ts**: Made the `FindAllCoreViewsDocument` query
conditional - only executes when `isOnAWorkspace` is true, preventing
unnecessary queries for users not on a workspace
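Apollo's `useQuery` supports a `skip` option that prevents the query from executing at all. A minimal sketch of the gating logic (the helper name is hypothetical; the real hook passes the option inline):

```typescript
// Sketch only: build the query options so the core-views query is
// skipped for users who are not on a workspace.
type QueryGateOptions = { skip: boolean };

const coreViewsQueryOptions = (isOnAWorkspace: boolean): QueryGateOptions => ({
  skip: !isOnAWorkspace,
});

// Assumed usage inside the hook:
// useQuery(FindAllCoreViewsDocument, coreViewsQueryOptions(isOnAWorkspace));
```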
### Documentation
- Added TODO comments in multiple files (`useAgentChatData.ts`,
`useWorkspaceFromInviteHash.ts`, `useGetPublicWorkspaceDataByDomain.ts`,
`useFindManyRecords.ts`, `useSingleRecordPickerPerformSearch.ts`)
referencing PR #18584 for future refactoring of useEffect hooks to avoid
unnecessary re-renders
## Implementation Details
- The removal of store upsert operations suggests these records are
already being managed elsewhere or the operations were redundant
- Type improvements maintain backward compatibility while providing
better IDE support and compile-time checking
- Conditional query execution reduces unnecessary network requests and
improves performance for non-workspace users
https://claude.ai/code/session_01YQErkoHotMvM6VL3JkWAqV
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR upgrades Apollo Client from v3.10.0 to v3.11.0 and refactors
error handling patterns across the codebase to use a new centralized
`useSnackBarOnQueryError` hook.
## Key Changes
- **Dependency Update**: Upgraded `@apollo/client` from `^3.10.0` to
`^3.11.0` in root package.json
- **New Hook**: Added `useSnackBarOnQueryError` hook for centralized
Apollo query error handling with snack bar notifications
- **Error Handling Refactor**: Updated 100+ files to use the new error
handling pattern:
- Removed direct `ApolloError` imports where no longer needed
- Replaced manual error handling logic with `useSnackBarOnQueryError`
hook
- Simplified error handling in hooks and components across multiple
modules
- **GraphQL Codegen**: Updated codegen configuration files to work with
Apollo Client v3.11.0
- **Type Definitions**: Added TypeScript declaration file for
`apollo-upload-client` module
- **Test Updates**: Updated test files to reflect new error handling
patterns
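The PR does not show the hook's internals, but the centralized pattern can be sketched as a factory that turns a snack-bar function into an Apollo `onError` callback (all names here except Apollo's `onError` option are assumptions):

```typescript
// Hedged sketch of a centralized query-error handler; the real
// useSnackBarOnQueryError hook presumably wires something like this
// into the query options it returns.
type SnackBar = (message: string) => void;

const buildOnQueryError =
  (enqueueSnackBar: SnackBar) =>
  (error: { message: string }) => {
    // One place to decide how Apollo errors surface to the user.
    enqueueSnackBar(error.message);
  };

// Assumed usage in a component hook:
// const onError = buildOnQueryError(enqueueSnackBar);
// useQuery(MyDocument, { onError });
```

Centralizing the callback is what lets the 100+ call sites drop their manual `ApolloError` handling.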
## Notable Implementation Details
- The new `useSnackBarOnQueryError` hook provides a consistent way to
handle Apollo query errors with automatic snack bar notifications
- Changes span across multiple feature areas: auth, object records,
settings, workflows, billing, and more
- All changes maintain backward compatibility while improving code
maintainability and reducing duplication
- Jest configuration updated to work with the new Apollo Client version
https://claude.ai/code/session_019WGZ6Rd7sEHuBg9sTrXRqJ
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR delivers a working demo workspace skill.
- Iterated on the skill until it produces a working demo workspace
- Fixed an infinite loop in the AI chat by memoizing the ai-sdk output
- Finished the `navigateToView` implementation
- Increased `MAX_STEPS` to 300 so the chat doesn't quit in the middle of
a long-running skill
- Added `CreateManyRelationFields`
## Summary
Removes the `recoil` dependency entirely from `package.json` and
`twenty-front/package.json`, completing the migration to Jotai as the
sole state management library.
Removes all Recoil infrastructure: `RecoilRoot` wrapper from `App.tsx`
and test decorators, `RecoilDebugObserver`, Recoil-specific ESLint rules
(`use-getLoadable-and-getValue-to-get-atoms`,
`useRecoilCallback-has-dependency-array`), and legacy Recoil utility
hooks/types (`useRecoilComponentState`, `useRecoilComponentValue`,
`createComponentState`, `createFamilyState`, `getSnapshotValue`,
`cookieStorageEffect`, `localStorageEffect`, etc.).
Renames all `V2`-suffixed Jotai state files and types to their canonical
names (e.g., `ComponentStateV2` -> `ComponentState`,
`agentChatInputStateV2` -> `agentChatInputState`, `SelectorCallbacksV2`
-> `SelectorCallbacks`), and removes the now-redundant V1 counterparts.
Updates ~433 files across the codebase to use the renamed Jotai imports,
remove Recoil imports, and clean up test wrappers (`RecoilRootDecorator`
-> `JotaiRootDecorator`).
## Summary
Replaces the static "Ask AI" header in the command menu with the
conversation’s auto-generated title once it’s set after the first
message.
## Changes
- **Backend:** Title is generated after the first user message (existing
behavior).
- **Frontend:** After the first stream completes, we fetch the thread
title and sync it to:
- `currentAIChatThreadTitleState` (persists across command menu
close/reopen)
- Command menu page info and navigation stack (so the title survives
back navigation)
- **Entry points:** Opening Ask AI from the left nav or command center
uses the same title resolution (explicit `pageTitle` → current thread
title → "Ask AI" fallback).
- **Race fix:** Title sync only runs when the thread that finished
streaming is still the active thread, so switching threads mid-stream
doesn’t overwrite the current thread’s title.
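The race guard can be sketched as a pure state update (types and names here are assumptions, not the actual state shape): the fetched title is applied only if the thread that finished streaming is still the active one.

```typescript
// Hedged sketch of the title-sync race fix.
type ThreadTitleState = { activeThreadId: string | null; title: string };

const applyFetchedTitle = (
  state: ThreadTitleState,
  finishedThreadId: string,
  fetchedTitle: string,
): ThreadTitleState =>
  // Guard: switching threads mid-stream must not let the old thread's
  // fetched title overwrite the current thread's title.
  state.activeThreadId === finishedThreadId
    ? { ...state, title: fetchedTitle }
    : state;
```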
---------
Co-authored-by: Félix Malfait <felix@twenty.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
⚠️ **AI-generated PR — not ready for review** ⚠️
cc @FelixMalfait
---
## Changes
### System prompt improvements
- Explicit skill-before-tools workflow to prevent the model from calling
tools without loading the matching skill first
- Data efficiency guidance (default small limits, use filters)
- Pluralized `load_skill` → `load_skills` for consistency with
`load_tools`
### Token usage reduction
- Output serialization layer: strips null/undefined/empty values from
tool results
- Lowered default `find_*` limit from 100 → 10, max from 1000 → 100
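The serialization layer's behavior can be sketched as a recursive cleaner (this is an illustrative implementation, not the PR's code): drop `null`/`undefined`, empty strings, and containers that become empty after cleaning, while keeping meaningful falsy values like `0` and `false`.

```typescript
// Hedged sketch of stripping empty values from tool results before
// they are serialized back to the model.
const stripEmpty = (value: unknown): unknown => {
  if (Array.isArray(value)) {
    const items = value.map(stripEmpty).filter((v) => v !== undefined);
    return items.length > 0 ? items : undefined;
  }
  if (value !== null && typeof value === 'object') {
    const entries = Object.entries(value as Record<string, unknown>)
      .map(([k, v]) => [k, stripEmpty(v)] as const)
      .filter(([, v]) => v !== undefined);
    return entries.length > 0 ? Object.fromEntries(entries) : undefined;
  }
  // Drop null/undefined/'' but keep 0 and false, which carry meaning.
  if (value === null || value === undefined || value === '') return undefined;
  return value;
};
```

Every dropped key is tokens the model never has to read, which compounds across multi-step tool loops.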
### System object tool generation
- System objects (calendar events, messages, etc.) now generate AI tools
- Only workflow-related and favorite-related objects are excluded
### Context window display fix
- **Bug**: UI compared cumulative tokens (sum of all turns) against
single-request context window → showed 100% after a few turns
- **Fix**: Track `conversationSize` (last step's `inputTokens`) which
represents the actual conversation history size sent to the model
- New `conversationSize` column on thread entity with migration
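The bug and fix can be made concrete with a small sketch (the `Step` shape is assumed): summing every turn's tokens overshoots the per-request context window, while the last step's `inputTokens` already contain the full history sent to the model.

```typescript
// Hedged sketch of the context-window accounting fix.
type Step = { inputTokens: number; outputTokens: number };

// Bug: cumulative tokens across all turns, which exceeds a single
// request's context window after only a few turns.
const cumulativeTokens = (steps: Step[]) =>
  steps.reduce((sum, s) => sum + s.inputTokens + s.outputTokens, 0);

// Fix: the last step's inputTokens = the conversation history actually
// sent in the most recent request.
const conversationSize = (steps: Step[]) =>
  steps.length > 0 ? steps[steps.length - 1].inputTokens : 0;

const contextUsagePercent = (steps: Step[], contextWindow: number) =>
  Math.min(100, (conversationSize(steps) / contextWindow) * 100);
```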
### Workspace AI instructions
- Support for custom workspace-level AI instructions
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
## Summary
Fixes an issue where the AI chat would show loading shimmers
indefinitely when opened on a workspace with no conversation history.
**Root cause:** The `isLoading` state in `useAgentChat` included
`!currentAIChatThread`. On workspaces with no chat threads,
`currentAIChatThread` remained `null`, causing `isLoading` to be
permanently `true`.
**Changes:**
- Remove `!currentAIChatThread` from `isLoading` calculation in
`useAgentChat` - this state should only reflect streaming/file selection
status
- Auto-create a chat thread in `useAgentChatData` when the threads query
returns empty, ensuring a valid thread exists for the `useChat` hook to
initialize properly
- Add primary font color to empty state title for better visibility
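The `isLoading` fix reduces to removing one term from a boolean expression. A hedged sketch (the flag names are assumptions based on the description):

```typescript
// Hedged sketch of the useAgentChat loading-state fix.
type AgentChatFlags = {
  isStreaming: boolean;
  isFileUploading: boolean;
  currentAIChatThread: { id: string } | null;
};

// Before (bug): `!currentAIChatThread` keeps this permanently true on
// workspaces that have no chat threads yet.
const isLoadingBuggy = (f: AgentChatFlags) =>
  f.isStreaming || f.isFileUploading || !f.currentAIChatThread;

// After: loading reflects only streaming/file-selection status.
const isLoadingFixed = (f: AgentChatFlags) =>
  f.isStreaming || f.isFileUploading;
```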
## Test plan
1. Create a new workspace or use a workspace with no AI chat history
2. Open the AI chat
3. Verify the empty state shows (not infinite loading shimmer)
4. Send a message and verify it works correctly
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Cursor <cursoragent@cursor.com>
## Summary
- Add a context usage indicator to the AI chat interface inspired by
Vercel's AI SDK Context component
- Display token consumption, context window utilization percentage, and
estimated cost in credits
- Show a circular progress ring with percentage, revealing detailed
breakdown on hover
## Changes
### Backend
- Stream usage metadata (tokens, model config) via `messageMetadata`
callback in `agent-chat-streaming.service.ts`
- Return model config from `chat-execution.service.ts`
- Add usage and model types to `ExtendedUIMessage` metadata
### Frontend
- New `ContextUsageProgressRing` component - circular SVG progress
indicator
- New `AIChatContextUsageButton` component with hover card showing:
- Progress bar with used/total tokens
- Input/output token counts with credit costs
- Total credits consumed
- Track cumulative usage in Recoil state (`agentChatUsageState`)
- Reset usage when creating new chat thread
- Integrate button into `AIChatTab`
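The progress-ring part of the UI typically comes down to SVG stroke-dash geometry. A hedged sketch of the math (not the actual `ContextUsageProgressRing` code): the dash offset is the fraction of the circumference left unfilled.

```typescript
// Hedged sketch: stroke-dasharray/stroke-dashoffset values for an SVG
// circle that fills proportionally to a percentage.
const ringDash = (percent: number, radius: number) => {
  const circumference = 2 * Math.PI * radius;
  const clamped = Math.max(0, Math.min(100, percent));
  return {
    strokeDasharray: circumference,
    // 0 offset = full ring; full circumference = empty ring.
    strokeDashoffset: circumference * (1 - clamped / 100),
  };
};
```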
## Test plan
- [ ] Open AI chat and send a message
- [ ] Verify the context usage button appears with percentage
- [ ] Hover over the button to see detailed breakdown
- [ ] Verify credits are calculated correctly
- [ ] Create a new chat thread and verify usage resets to 0
Adds an intelligent routing system that automatically selects the best
agent for user queries based on conversation context.
### Changes:
- Added `routerModel` column to workspace table for configurable router
LLM selection
- Implemented `RouterService` with conversation history analysis and
agent matching logic
- Created router settings UI in AI Settings page with model dropdown
- Removed agent-specific thread associations - threads are now
agent-agnostic
- Added real-time routing status notification in chat UI with shimmer
effect
- Removed automatic default assistant agent creation
- Renamed GraphQL operations from agent-specific to generic (e.g.,
`agentChatThreads` → `chatThreads`)
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: Félix Malfait <felix@twenty.com>
Closes [#1583](https://github.com/twentyhq/core-team-issues/issues/1583)
- Removed `messageId` column from `File` table and its references in
code (unrelated to this PR)
- Updated AI chat to use React Context API with persistent provider in
`CommandMenuContainer`
- Converted all AI chat component states to regular Recoil atoms (no
instance context needed)
---------
Co-authored-by: Félix Malfait <felix@twenty.com>