## Summary
- Add 2.0.0 release notes and illustrations to `twenty-website-new`
- Remove old release mdx files from the legacy `twenty-website`
- Fix the resources-menu "Releases" preview to pull the latest release
dynamically, using the first (hero) image of the latest release's mdx file
## Test plan
- [ ] Open any page on `twenty-website-new`, hover "Resources" → the
Releases preview shows "See what shipped in 2.0.0" with the Build-an-app
hero illustration
- [ ] Visit `/releases` and confirm 2.0.0 renders with all five sections
and images
- [ ] Confirm legacy `twenty-website` no longer ships the deleted
release pages
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Summary
- Fix side panel hotkeys (Ctrl+K, Escape, etc.) breaking when opening
records from the record index table
- Ensure `side-panel-focus` is always restored in the focus stack when
navigating within an already-open side panel
- Remove stale `globalHotkeysConfig` on `record-index` focus item that
persisted after the side panel closed
## Problem
When clicking records in the table to open them in the side panel,
`useLeaveTableFocus` called `resetFocusStackToRecordIndex` which wiped
the entire focus stack, including the `side-panel-focus` entry. Since
`openSidePanel` early-returned when the panel was already open,
`side-panel-focus` was never restored. Additionally,
`resetFocusStackToRecordIndex` set `enableGlobalHotkeysWithModifiers:
false` on the remaining `record-index` item when the side panel was
open, and this stale config persisted after the panel closed,
permanently blocking all hotkeys.
## Fix
- **`useNavigateSidePanel.ts`**: Move `pushFocusItemToFocusStack` before
the `isSidePanelOpened` early-return so the side panel's focus entry is
always present in the stack
- **`useResetFocusStackToRecordIndex.ts`**: Always set
`enableGlobalHotkeysWithModifiers: true` on the `record-index` item.
Hotkey scoping when the side panel is open is handled by
`side-panel-focus` sitting on top of the focus stack
https://github.com/user-attachments/assets/ad25befb-338d-4166-9580-18d4e92d6f9b
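The reordering can be sketched with a toy focus stack (hook and item names come from the PR; the stack shape and dedup logic here are assumptions):

```typescript
type FocusItem = { id: string };

// Minimal model of the focus stack; the real logic lives in useNavigateSidePanel.ts.
const focusStack: FocusItem[] = [{ id: 'record-index' }];

const pushFocusItemToFocusStack = (item: FocusItem) => {
  if (!focusStack.some((existing) => existing.id === item.id)) {
    focusStack.push(item);
  }
};

// Before the fix the push happened after the early-return, so an
// already-open panel never restored 'side-panel-focus'. Now: push first, then bail.
const openSidePanel = (isSidePanelOpened: boolean) => {
  pushFocusItemToFocusStack({ id: 'side-panel-focus' });
  if (isSidePanelOpened) {
    return; // the early-return no longer skips the focus-stack restore
  }
  // ...actually open the panel...
};

openSidePanel(true);
```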
## Summary
- AI is now GA, so the public/lab `IS_AI_ENABLED` flag is removed from
`FeatureFlagKey`, the public flag catalog, and the dev seeder.
- Drops every backend `@RequireFeatureFlag(IS_AI_ENABLED)` guard (agent,
agent chat, chat subscription, role-to-agent assignment, workflow AI
step creation) and the now-unused `FeatureFlagModule`/`FeatureFlagGuard`
wiring in the AI and workflow modules.
- Removes frontend gating from settings nav, role
permissions/assignment/applicability, command menu hotkeys, side panel,
mobile/drawer nav, and the agent chat provider so AI UI is always on.
Tests and generated GraphQL/SDK schemas updated accordingly.
## Test plan
- [x] `npx nx typecheck twenty-shared`
- [x] `npx nx typecheck twenty-server`
- [x] `npx nx typecheck twenty-front`
- [x] `npx nx lint:diff-with-main twenty-server`
- [x] `npx nx lint:diff-with-main twenty-front`
- [x] `npx jest --config=packages/twenty-server/jest.config.mjs
feature-flag`
- [x] `npx jest --config=packages/twenty-server/jest.config.mjs
workspace-entity-manager`
- [ ] Manual smoke test: AI features still accessible without any flag
row in `featureFlag`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Summary
- Remove the \"Apps are currently in alpha\" warning from 8 pages under
`developers/extend/apps/` (getting-started, architecture/building,
data-model, layout, logic-functions, front-components, cli-and-testing,
publishing).
- Keep the warning on the Skills & Agents page only, and reword it to
scope it to that feature: \"Skills and agents are currently in alpha.
The feature works but is still evolving.\"
## Test plan
- [ ] Preview docs build and confirm the warning banner no longer
appears on the 8 pages above.
- [ ] Confirm the warning still renders on the Skills & Agents page with
the updated wording.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Summary
- Hosting FAQ: drops the inaccurate "most teams run it on our managed
cloud" claim and presents self-hosting and cloud as equal options.
- Pricing FAQ: replaces the awkward "for teams needing enterprise-grade
security" with "for teams that need finer access control", which more
accurately describes what SSO and row-level permissions do.
## Test plan
- [ ] Visually verify the FAQ section on the website renders the updated
copy.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Summary
- **New Getting Started section** with quickstart guide and restructured
navigation
- **Halftone-style illustrations** for User Guide and Developer
introduction cards using a Canvas 2D filter script
- **Removed hero images** (`image:` frontmatter + `<Frame><img>` blocks)
from all user-guide article pages
- **Cleaned up translations** (13 languages): removed hero images and
updated introduction cards to use halftone style
- **Cleaned up twenty-ui pages**: removed outdated hero images from
component docs
- **Deleted orphaned images**: `table.png`, `kanban.png`
- **Developer page**: fixed duplicate icon, switched to 3-column layout
## Test plan
- [ ] Verify docs site builds without errors
- [ ] Check User Guide introduction page renders halftone card images in
both light and dark mode
- [ ] Check Developer introduction page renders 3-column layout with
distinct icons
- [ ] Confirm article pages no longer show hero images at the top
- [ ] Spot-check a few translated pages to ensure hero images are
removed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions <github-actions@twenty.com>
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- Override the `AppChip` label so the Twenty standard application always
renders as `Standard` and the workspace custom application always
renders as `Custom`, instead of leaking each app's underlying name (e.g.
`Twenty Eng's custom application`).
- Detection mirrors the logic already used in
`useApplicationAvatarColors`, relying on
`TWENTY_STANDARD_APPLICATION_UNIVERSAL_IDENTIFIER` /
`TWENTY_STANDARD_APPLICATION_NAME` and
`currentWorkspace.workspaceCustomApplication.id`.
- The `This app` label for the current application context and the
original `application.name` fallback for any other installed app are
preserved.
## Affected UI
- Settings → Data model → Existing objects (App column).
- Anywhere else `AppChip` / `useApplicationChipData` is used.
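The override can be sketched as a small resolver (the constant value and workspace shape are assumptions; only the decision order mirrors the PR):

```typescript
// Hedged sketch of the AppChip label override — not the actual hook body.
const TWENTY_STANDARD_APPLICATION_UNIVERSAL_IDENTIFIER = 'twenty-standard';

type Application = { id: string; name: string; universalIdentifier?: string };
type Workspace = { workspaceCustomApplication?: { id: string } };

const getAppChipLabel = (
  application: Application,
  currentWorkspace: Workspace,
  isCurrentApplicationContext: boolean,
): string => {
  // Current application's own detail page keeps its contextual label.
  if (isCurrentApplicationContext) return 'This app';
  // Standard app always renders as "Standard", never its underlying name.
  if (application.universalIdentifier === TWENTY_STANDARD_APPLICATION_UNIVERSAL_IDENTIFIER)
    return 'Standard';
  // Workspace custom app always renders as "Custom".
  if (application.id === currentWorkspace.workspaceCustomApplication?.id) return 'Custom';
  // Any other installed app keeps its original name as the fallback.
  return application.name;
};
```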
## Summary
Application-side preparation so `twenty-website-new` can take over the
canonical `twenty.com` hostname from the legacy `twenty-website`
(Vercel) deployment without breaking SEO or existing inbound links.
### What's added
- **`src/app/robots.ts`** — serves `/robots.txt` and points crawlers
at the new `/sitemap.xml`. Honours `NEXT_PUBLIC_WEBSITE_URL` with a
`https://twenty.com` fallback.
- **`src/app/sitemap.ts`** — serves `/sitemap.xml` listing the
canonical public routes of the new website (home, why-twenty,
product, pricing, partners, releases, customers + each case study,
privacy-policy, terms).
- **`next.config.ts` `redirects()`** — adds:
- The existing `docs.twenty.com` permanent redirects from the legacy
site (`/user-guide`, `/developers`, `/twenty-ui` and their nested
variants).
- 308-redirects for renamed/restructured pages so existing inbound
links and Google results keep working:
| From | To |
|-------------------------------------|-----------------------------|
| `/story` | `/why-twenty` |
| `/legal/privacy` | `/privacy-policy` |
| `/legal/terms` | `/terms` |
| `/legal/dpa` | `/terms` |
| `/case-studies/9-dots-story` | `/customers/9dots` |
| `/case-studies/act-immi-story` | `/customers/act-education` |
| `/case-studies/:slug*` | `/customers` |
| `/implementation-services` | `/partners` |
| `/onboarding-packages` | `/partners` |
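The table above maps onto `redirects()` entries roughly like this (a sketch, not the exact config; Next.js emits 308 for `permanent: true` rewrites of the source paths):

```typescript
// Sketch of the renamed-page redirects; the real list lives in next.config.ts.
const redirects = async () => [
  { source: '/story', destination: '/why-twenty', permanent: true }, // 308
  { source: '/legal/privacy', destination: '/privacy-policy', permanent: true },
  { source: '/legal/terms', destination: '/terms', permanent: true },
  { source: '/legal/dpa', destination: '/terms', permanent: true },
  { source: '/case-studies/9-dots-story', destination: '/customers/9dots', permanent: true },
  { source: '/case-studies/act-immi-story', destination: '/customers/act-education', permanent: true },
  // Order matters: specific case-study slugs must precede the catch-all.
  { source: '/case-studies/:slug*', destination: '/customers', permanent: true },
  { source: '/implementation-services', destination: '/partners', permanent: true },
  { source: '/onboarding-packages', destination: '/partners', permanent: true },
];
```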
### What's intentionally **not** added
Routes that exist on the legacy site but have no equivalent on the
new website are left as honest 404s for now (we can decide on landing
pages later):
- `/jobs`, `/jobs/*`
- `/contributors`, `/contributors/*`
- `/oss-friends`
## Cutover order
1. Merge this PR.
2. Bump the website-new image tag in `twenty-infra-releases`
(`prod-eu`) so the new robots / sitemap / redirects are live on
`https://website-new.twenty.com`.
3. Smoke test on `https://website-new.twenty.com`:
- `curl -sI https://website-new.twenty.com/robots.txt`
- `curl -sI https://website-new.twenty.com/sitemap.xml`
- `curl -sI https://website-new.twenty.com/story` — expect 308 to
`/why-twenty`
- `curl -sI https://website-new.twenty.com/legal/privacy` — expect 308
to `/privacy-policy`
4. Merge the companion `twenty-infra` PR
([twentyhq/twenty-infra#589](https://github.com/twentyhq/twenty-infra/pull/589))
so the ingress accepts `Host: twenty.com` and `Host: www.twenty.com`.
5. Flip the Cloudflare DNS records for `twenty.com` and `www` to the
EKS NLB and purge the Cloudflare cache.
## Summary
- Bump `twenty-sdk` from `1.23.0` to `2.0.0`
- Bump `twenty-client-sdk` from `1.23.0` to `2.0.0`
- Bump `create-twenty-app` from `1.23.0` to `2.0.0`
## Summary
Following the recent move of `defineXXX` exports (e.g.
`defineLogicFunction`, `defineObject`, `defineFrontComponent`, …) from
the `twenty-sdk` root entry to the `twenty-sdk/define` subpath, this PR
aligns the documentation and the marketing site so users see the correct
import paths.
- `packages/twenty-docs/developers/extend/apps/building.mdx`: every code
snippet now imports `defineXXX` and related types/enums (`FieldType`,
`RelationType`, `OnDeleteAction`,
`STANDARD_OBJECT_UNIVERSAL_IDENTIFIERS`, `PermissionFlag`, `ViewKey`,
`NavigationMenuItemType`, `PageLayoutTabLayoutMode`,
`getPublicAssetUrl`, `DatabaseEventPayload`, `RoutePayload`,
`InstallPayload`, …) from `twenty-sdk/define`. Mixed imports were split
so that hooks and host-API helpers (`useRecordId`, `useUserId`,
`useFrontComponentId`, `enqueueSnackbar`, `closeSidePanel`, `pageType`,
`numberOfSelectedRecords`, `objectPermissions`, `everyEquals`,
`isDefined`) come from `twenty-sdk/front-component`.
- `packages/twenty-website-new/.../DraggableTerminal/TerminalEditor/editorData.ts`:
the 29 demo source strings shown in the homepage's draggable terminal
now import from `twenty-sdk/define`.
Example apps under `packages/twenty-apps/{examples,internal,fixtures}`
were already using the right subpaths, so no code changes were needed
there.
Translations under `packages/twenty-docs/l/` are intentionally left
untouched — they will be refreshed via Crowdin from the English source.
## Test plan
- [ ] Skim the rendered `building.mdx` on Mintlify preview to confirm
code snippets look right.
- [ ] Visual check on the website's draggable terminal demo.
Made with [Cursor](https://cursor.com)
## Summary
We are releasing Twenty v2.0. This PR sets up the
upgrade-version-command machinery for the new release line:
- Move `1.23.0` into `TWENTY_PREVIOUS_VERSIONS` (it just shipped)
- Set `TWENTY_CURRENT_VERSION` to `2.0.0` (no specific upgrade commands
— this is just the major version cut)
- Set `TWENTY_NEXT_VERSIONS` to `['2.1.0']` so future PRs that
previously would have targeted `1.24.0` now target `2.1.0`
- Add empty `V2_0_UpgradeVersionCommandModule` and
`V2_1_UpgradeVersionCommandModule` and wire them into
`WorkspaceCommandProviderModule`
- Refresh the `InstanceCommandGenerationService` snapshots to reflect
the new current version (`2.0.0` / `2-0-` slug)
The `2-0/` directory is intentionally empty — there are no specific
upgrade commands for the v2.0 cut. New upgrade commands authored after
this merges should land in `2-1/` (or be generated against `--version
2.1.0`).
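The version cut described above boils down to three constants (names from the PR; surrounding entries elided/assumed):

```typescript
// Sketch of the upgrade-version constants after this PR.
const TWENTY_PREVIOUS_VERSIONS = [/* …earlier versions… */ '1.23.0']; // 1.23.0 just shipped
const TWENTY_CURRENT_VERSION = '2.0.0'; // the major cut — no specific upgrade commands
const TWENTY_NEXT_VERSIONS = ['2.1.0']; // future PRs target 2.1.0 instead of 1.24.0
```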
## Test plan
- [x] `npx jest` on the impacted upgrade test files
(`upgrade-sequence-reader`, `upgrade-command-registry`,
`instance-command-generation`) passes (41 tests, 8 snapshots)
- [x] `prettier --check` and `oxlint` clean on touched files
- [ ] Manual: open `nx run twenty-server:command -- upgrade --dry-run`
against a local stack with workspaces still on `1.23.0` and confirm the
sequence is computed without errors
Made with [Cursor](https://cursor.com)
## Summary
- Bump `twenty-sdk` from `1.23.0-canary.9` to `1.23.0`
- Bump `twenty-client-sdk` from `1.23.0-canary.9` to `1.23.0`
- Bump `create-twenty-app` from `1.23.0-canary.9` to `1.23.0`
Made with [Cursor](https://cursor.com)
## Context
ActivityTargetsInlineCell passed editModeContent via
RecordInlineCellContext, but that context key was no longer read.
The fix aligns the code with the rest of the codebase.
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Problem
Building `twenty-website-new` in any environment that does **not** also
include `twenty-website` (e.g. the Docker image used by the deployment
workflow) fails with:
```
Error: Turbopack build failed with 99 errors:
Error evaluating Node.js code
Error: Cannot find module 'next/babel'
Require stack:
- /app/node_modules/@babel/core/lib/config/files/plugins.js
- ...
- /app/node_modules/babel-merge/src/index.js
- /app/packages/twenty-website-new/node_modules/@wyw-in-js/transform/lib/plugins/babel-transform.js
- /app/packages/twenty-website-new/node_modules/next-with-linaria/lib/loaders/turbopack-transform-loader.js
```
## Root cause
`packages/twenty-website-new/wyw-in-js.config.cjs` references presets by
bare name:
```js
presets: ['next/babel', '@wyw-in-js'],
```
These options flow through
[`babel-merge`](https://github.com/cellog/babel-merge/blob/master/src/index.js#L11),
which calls `@babel/core`'s `resolvePreset(name)` **without** a
`dirname` argument. With no `dirname`, `@babel/core` falls back to
`require.resolve(id)` from its own file location — so resolution starts
at `node_modules/@babel/core/...` and only walks parent `node_modules`
directories from there, never down into individual workspace packages.
In a normal local install both presets happen to be hoisted to the
workspace root (because `twenty-website` pins `next@^14` and wins the
hoist), so resolution succeeds by accident. In the single-workspace
Docker build only `twenty-website-new` is present, so `next` (16.1.7)
and `@wyw-in-js/babel-preset` are nested in
`packages/twenty-website-new/node_modules` and Babel cannot reach them —
hence the failure.
## Fix
Pre-resolve both presets with `require.resolve(...)` in the wyw-in-js
config so Babel receives absolute paths and resolution becomes
independent of hoisting layout.
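A sketch of the fixed config fragment (the resolved name behind the `@wyw-in-js` shorthand is assumed to be `@wyw-in-js/babel-preset`, per the require stack above):

```js
// wyw-in-js.config.cjs — presets pre-resolved to absolute paths so Babel
// no longer depends on the hoisting layout.
presets: [
  require.resolve('next/babel'),
  require.resolve('@wyw-in-js/babel-preset'),
],
```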
## Verification
- `yarn nx build twenty-website-new` — passes locally with the full
workspace
- Reproduced the original failure with a simulated single-workspace
install (only `twenty-website-new` and `twenty-oxlint-rules` present),
confirmed it fails on `main` and passes with this patch
- This unblocks the `twenty-infra` `Deploy Website New` workflow
([related infra PR](https://github.com/twentyhq/twenty-infra/pull/586))
Made with [Cursor](https://cursor.com)
## Summary
- Removes the per-step `canBillMeteredProduct(WORKFLOW_NODE_EXECUTION)`
gate in `WorkflowExecutorWorkspaceService.executeStep` so workflows keep
running when a workspace reaches `hasReachedCurrentPeriodCap`.
Previously every step failed with
`BILLING_WORKFLOW_EXECUTION_ERROR_MESSAGE` ("No remaining credits to
execute workflow…").
- Drops the now-unused `BillingService` injection, related imports, and
the helper `canBillWorkflowNodeExecution`. Updates the spec to drop the
corresponding billing-validation case and mock.
- Leaves the constant file and `BillingService` itself in place, plus a
TODO at the previous gate site, so the behavior can be re-enabled with a
small, reviewable revert.
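The gate site now reduces to something like this (a sketch; the service shape is heavily simplified):

```typescript
// Minimal sketch: the per-step billing gate is gone; a TODO marks the old site.
type StepResult = { stepId: string; status: 'SUCCESS' };

const executeStep = async (stepId: string): Promise<StepResult> => {
  // TODO(billing): the canBillMeteredProduct(WORKFLOW_NODE_EXECUTION) check
  // previously threw BILLING_WORKFLOW_EXECUTION_ERROR_MESSAGE here when
  // hasReachedCurrentPeriodCap was true. Re-add it to restore enforcement.
  return { stepId, status: 'SUCCESS' };
};
```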
## Notes
- Usage events are still emitted (`USAGE_RECORDED` /
`UsageResourceType.WORKFLOW`), and `EnforceUsageCapJob` keeps computing
the cap and flipping `hasReachedCurrentPeriodCap` — only the executor
stops consulting that flag.
- The runner-level `canFeatureBeUsed` check in
`WorkflowRunnerWorkspaceService.run` was already log-only (subscription
presence, not credits), so no change there.
- AI chat (`agent-chat.resolver.ts`) keeps its own
`BILLING_CREDITS_EXHAUSTED` gate; this PR does not touch it.
## Test plan
- [x] `npx jest workflow-executor.workspace-service.spec.ts` (17/17
pass)
- [ ] Manual: with billing enabled and the metered subscription item
flagged `hasReachedCurrentPeriodCap = true`, trigger a workflow run and
verify steps execute end-to-end instead of failing with the billing
error.
Made with [Cursor](https://cursor.com)
## Summary
Fix the website-new Docker build which currently fails with:
```
NX "production" is an invalid fileset.
All filesets have to start with either {workspaceRoot} or {projectRoot}.
```
`packages/twenty-website-new/project.json` declares `"inputs":
["production", "^production"]` — a named input defined in the root
`nx.json`. Without copying `nx.json` into the image, nx can't
resolve it and the build fails.
Mirrors what the main twenty Dockerfile already does (line 9 of
`packages/twenty-docker/twenty/Dockerfile` copies both
`tsconfig.base.json` and `nx.json`).
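The missing copy step, sketched (exact paths and layout assumed; this mirrors the main Dockerfile's line):

```dockerfile
# nx.json must be in the image so nx can resolve named inputs like "production".
COPY ./nx.json ./tsconfig.base.json /app/
```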
## Test plan
- [ ] Re-run twenty-infra's `Deploy Website New` workflow (dev) —
build step should now pass
Made with [Cursor](https://cursor.com)
## Summary
- The Data Model table was labeling core Twenty objects (e.g. Person,
Company) as **Managed** even though they are part of the standard
application. This PR teaches the frontend to resolve an `applicationId`
back to its real application name (`Standard`, `Custom`, or any
installed app), and removes the misleading **Managed** label entirely.
- Introduces a single, consistent way to render an "app badge" across
the settings UI:
- new `Avatar` variant `type="app"` (rounded 4px corners + 1px
deterministic border derived from `placeholderColorSeed`)
- new `AppChip` component (icon + name) backed by a new
`useApplicationChipData` hook
- new `useApplicationsByIdMap` hook + `CurrentApplicationContext` so the
chip can render **This app** when shown inside the matching app's detail
page
- Reuses these primitives on:
- the application detail page header (`SettingsApplicationDetailTitle`)
- the Installed / My apps tables (`SettingsApplicationTableRow`)
- the NPM packages list (`SettingsApplicationsDeveloperTab`)
- Backend: exposes a minimal `installedApplications { id name
universalIdentifier }` field on `Workspace` (resolved from the workspace
cache, soft-deleted entries filtered out) so the frontend can resolve
`applicationId` -> name without N+1 fetches.
- Cleanup: deletes `getItemTagInfo` and inlines its tiny
responsibilities into the components that need them, matching the
`RecordChip` pattern.
## Summary
Adds the Docker build for the new marketing website at
`packages/twenty-website-new`, mirroring the existing
`packages/twenty-docker/twenty-website/Dockerfile`.
Differences from the existing `twenty-website` Dockerfile:
- Uses `nx build twenty-website-new` / `nx start twenty-website-new`
- Drops the `KEYSTATIC_*` build-time fake env (the new website doesn't
use Keystatic)
- Doesn't copy `twenty-ui` source (the new website has no workspace
dependency on it)
The image will be built by the new `deploy-website-new.yaml` workflow in
[`twentyhq/twenty-infra`](https://github.com/twentyhq/twenty-infra) and
pushed to ECR repos `dev-website-new` / `staging-website-new`.
Companion PRs:
- twentyhq/twenty-infra: Helm chart + ArgoCD app + deploy workflow
- twentyhq/twenty-infra-releases: bootstrap tags.yaml
## Test plan
- [ ] Local build: `docker build -f
packages/twenty-docker/twenty-website-new/Dockerfile .`
- [ ] First run of `Deploy Website New` workflow on dev succeeds
(build + push to ECR)
- [ ] ArgoCD `website-new` application becomes Healthy on dev
- [ ] https://website-new.twenty-main.com serves the new website
Made with [Cursor](https://cursor.com)
## Summary
MCP tool execution crashed with `Cannot destructure property
'loadingMessage' of 'parameters' as it is undefined` whenever
`execute_tool` was called without an inner `arguments` field. Root
cause: `loadingMessage` is an AI-chat UX affordance (lets the LLM
narrate progress so the chat UI can show "Sending email…") but it was
being wrapped into **every** tool schema — including those advertised to
external MCP clients — and `dispatch` unconditionally stripped it,
crashing on `undefined` args.
The fix scopes the wrap/strip pair to AI-chat callers only:
- Pair wrap and strip inside `hydrateToolSet` (they belong together).
- New `includeLoadingMessage` option on `hydrateToolSet` /
`getToolsByName` / `getToolsByCategories` (default `true` so
AI-chat behavior is unchanged).
- MCP opts out → external clients see clean inputSchemas without a
required `loadingMessage` field.
- `dispatch` no longer strips; args default to `{}` defensively.
- `execute_tool` defaults `arguments` to `{}` at the LLM boundary.
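A minimal model of the scoping (option and function names come from the PR; the schema shapes are assumptions):

```typescript
type ToolSchema = { properties: Record<string, object>; required: string[] };
type Tool = { name: string; inputSchema: ToolSchema };

// Wrap lives next to strip: AI-chat callers (default) get loadingMessage in
// the schema; external MCP clients opt out and see the clean schema.
const hydrateToolSet = (
  tools: Tool[],
  { includeLoadingMessage = true }: { includeLoadingMessage?: boolean } = {},
): Tool[] =>
  tools.map((tool) =>
    includeLoadingMessage
      ? {
          ...tool,
          inputSchema: {
            properties: {
              ...tool.inputSchema.properties,
              loadingMessage: { type: 'string' },
            },
            required: [...tool.inputSchema.required, 'loadingMessage'],
          },
        }
      : tool,
  );

// dispatch no longer assumes args exist; they default defensively to {}.
const dispatch = (args: Record<string, unknown> = {}) => args;
```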
## Test plan
- [x] `npx nx typecheck twenty-server` passes
- [x] `npx oxlint` clean on changed files
- [x] `npx jest mcp-protocol mcp-tool-executor` — 23/23 tests pass
- [ ] Manually: call `execute_tool` via MCP with and without inner
`arguments` — verify no crash, endpoints execute
- [ ] Manually: inspect MCP `tools/list` response — verify
`search_help_center` schema no longer contains `loadingMessage`
- [ ] Regression: AI chat still streams loading messages as the LLM
calls tools
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Context
Standard page layout tabs, page layout widgets, and view field group
titles were hardcoded English in the backend. This PR brings them under
the same translation pipeline as views.
Note: once a standard widget/tab/section title is overridden, the
backend returns the stored value as-is, without translation.
## Summary
Same fix pattern as #19511 (`rolesPermissions` cartesian product).
The `Settings > Applications` page was hitting query read timeouts in
production. The offending SQL came from
`ApplicationService.findManyApplications` / `findOneApplication`, which
loaded **5 `OneToMany` children** in a single query via TypeORM
`relations`:
```
logicFunctions × agents × frontComponents × objects × applicationVariables
```
Postgres returns the Cartesian product of all five — e.g. 20 logic
functions × 5 agents × 30 front components × 100 objects × 10 variables
= **3M rows for ~165 distinct records**, which trivially exceeds the
read timeout.
## Changes
- **`findManyApplications`** — dropped all `OneToMany` relations. The
frontend `FIND_MANY_APPLICATIONS` query only selects scalar fields and
the `applicationRegistration` ManyToOne, so joining the children was
pure waste at the list level.
- **`findOneApplication`** — kept the cheap `ManyToOne` / `OneToOne`
joins (`packageJsonFile`, `yarnLockFile`, `applicationRegistration`) on
the main query and fetched the 5 `OneToMany` children in parallel via
`Promise.all`, reattaching them on the entity. Same shape as
`WorkspaceRolesPermissionsCacheService.computeForCache` after #19511.
- **`application.module.ts`** — registered the 5 child entity
repositories via `TypeOrmModule.forFeature`.
The other internal caller (`front-component.service.ts →
findOneApplicationOrThrow`) only reads
`application.universalIdentifier`, so the extra parallel single-key
lookups remain far cheaper than the previous 8-way join with row
explosion.
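The shape of the `findOneApplication` change can be sketched like this (the repository API here is a stand-in, not the TypeORM one): one cheap main query, then the five `OneToMany` children fetched in parallel and reattached, so row counts add (≈165) instead of multiplying (≈3M).

```typescript
type Repo<T> = { findBy: (applicationId: string) => Promise<T[]> };

const findOneApplication = async (
  id: string,
  repos: {
    logicFunctions: Repo<object>;
    agents: Repo<object>;
    frontComponents: Repo<object>;
    objects: Repo<object>;
    applicationVariables: Repo<object>;
  },
  // Main query keeps only the cheap ManyToOne/OneToOne joins.
  loadApplication: (id: string) => Promise<{ id: string }>,
) => {
  const application = await loadApplication(id);
  // Five independent single-key lookups instead of one 5-way join.
  const [logicFunctions, agents, frontComponents, objects, applicationVariables] =
    await Promise.all([
      repos.logicFunctions.findBy(id),
      repos.agents.findBy(id),
      repos.frontComponents.findBy(id),
      repos.objects.findBy(id),
      repos.applicationVariables.findBy(id),
    ]);
  // Reattach the children on the entity, as the service does.
  return { ...application, logicFunctions, agents, frontComponents, objects, applicationVariables };
};
```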
## Summary
Two small visual issues with the shared `CardPicker` (used in the
Enterprise plan modal and the onboarding plan picker):
- Labels like `Monthly` / `Yearly` were center-aligned inside their
cards while the subtitle (`$25 / seat / month`) stayed left-aligned,
because the underlying `<button>` element's default `text-align:
center` was leaking into the children.
- The hover background was painted on the same element that owned the
inner padding, so the hover surface didn't visually feel like the whole
card.
This PR:
- Moves the content padding into a new `StyledCardInner` so the outer
`<button>` is just the card chrome (border + radius + background +
hover).
- Adds `text-align: left` so titles align with their subtitles.
- Hoists `cursor: pointer` out of `:hover` (it should be on by
default for the card).
Affects:
- `EnterprisePlanModal` (Settings → Enterprise)
- `ChooseYourPlanContent` (onboarding trial picker)
## Summary
- Restructures the why-twenty page into a clearer three-act story (the
shift / what this means / the opportunity), with new copy across hero
subtitle, all editorials, marquee, quote and signoff.
- Adds visual rhythm via left/right section anchoring (sections 1 and 3
left-aligned, section 2 right-aligned) and per-section `GuideCrosshair`
markers at the top edge of each editorial.
- Adds a CTA `Signoff` section ("Get started") at the end of the page.
- Bumps the 3D quotation marks (`Quotes` illustration) so the Quote can
serve as a visual section break.
## Changes
- **Editorial section**
([Editorial.Heading](packages/twenty-website-new/src/sections/Editorial/components/Heading/Heading.tsx),
[Editorial.Body](packages/twenty-website-new/src/sections/Editorial/components/Body/Body.tsx),
[Editorial.Root](packages/twenty-website-new/src/sections/Editorial/components/Root/Root.tsx)):
- Default heading size `xl` → `lg`
- New `two-column-left` and `two-column-right` body layouts via
`data-align` on `TwoColumnGrid`
- New optional `crosshair` prop on `Editorial.Root` that anchors a
`GuideCrosshair` to the section
- **Why-twenty constants** — fresh copy in `hero.ts`, `editorial-one`,
`editorial-three`, `editorial-four`, `marquee.ts`, `quote.ts`,
`signoff.ts`
- **Page layout**
([why-twenty/page.tsx](packages/twenty-website-new/src/app/why-twenty/page.tsx)):
- Section 1 → left content + crosshair on right
- Section 2 → right content + crosshair on left
- Section 3 → left content + crosshair on right
- Adds `Signoff` block with `LinkButton` "Get started" CTA
- **Quote 3D illustration** — `previewDistance` 6 → 4 and bigger
`StyledVisualMount` (added a one-line `oxlint-disable` for the
pre-existing `@ts-nocheck` that the diff surfaced)
- **Signoff** — keeps the GuideCrosshair behavior limited to Partners
(per-page config map remains in place)
Editorial is only consumed on the why-twenty page so the heading/body
changes don't affect any other page.
## Test plan
- [ ] Visit `/why-twenty` on desktop — verify the three editorials read
as left/right/left with crosshairs at section top edges
- [ ] Verify the Signoff CTA renders white "Get started" pill button on
dark background and links to `app.twenty.com/welcome`
- [ ] Verify mobile layout — crosshairs hidden, content left-aligned,
body stacks single column
- [ ] Lighthouse / no console errors
🤖 Generated with [Claude Code](https://claude.com/claude-code)
## Summary
- Bumps `twenty-sdk`, `twenty-client-sdk`, and `create-twenty-app` from
`1.23.0-canary.2` to `1.23.0-canary.9`.
## Test plan
- [ ] Canary publish workflow succeeds for the three packages.
Made with [Cursor](https://cursor.com)
## Summary
Today the SDK lets apps declare `filters` on a view but not `sorts`, so
any view installed via an app manifest can never have a default
ordering. This PR adds declarative view sorts end-to-end: SDK manifest
type, `defineView` validation, CLI scaffold, and the application
install/sync pipeline that converts the manifest into the universal flat
entity used by workspace migrations. The persistence layer
(`ViewSortEntity`, resolvers, action handlers, builders…) already
existed server-side; the missing piece was the manifest → universal-flat
converter and the relation wiring on `view`.
## Changes
**`twenty-shared`**
- Add `ViewSortDirection` enum (`ASC` | `DESC`) and re-export it from
`twenty-shared/types`.
- Add `ViewSortManifest` type and an optional `sorts?:
ViewSortManifest[]` on `ViewManifest`, exported from
`twenty-shared/application`.
**`twenty-sdk`**
- Validate `sorts` entries in `defineView` (`universalIdentifier`,
`fieldMetadataUniversalIdentifier`, `direction` ∈ `ASC`/`DESC`).
- Add a commented `// sorts: [ ... ]` example to the CLI view scaffold
template + matching snapshot assertion.
**`twenty-server`**
- Re-export `ViewSortDirection` from `twenty-shared/types` in
`view-sort/enums/view-sort-direction.ts` (single source of truth,
backward compatible for existing imports).
- New converter `fromViewSortManifestToUniversalFlatViewSort` (+ unit
tests for `ASC` and `DESC`).
- Wire the converter into
`computeApplicationManifestAllUniversalFlatEntityMaps` so
`viewManifest.sorts` are added to `flatViewSortMaps`, mirroring how
filters are processed.
- Replace the `// @ts-expect-error TODO migrate viewSort to v2 /
viewSorts: null` placeholder in `ALL_ONE_TO_MANY_METADATA_RELATIONS`
with the proper relation (`viewSortIds` /
`viewSortUniversalIdentifiers`).
- Update affected snapshots (`get-metadata-related-metadata-names`,
`all-universal-flat-entity-foreign-key-aggregator-properties`).
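The converter itself is a small mapping; a hedged sketch (the flat entity's exact field names are assumptions, mirroring how filters are converted):

```typescript
type ViewSortManifest = {
  universalIdentifier: string;
  fieldMetadataUniversalIdentifier: string;
  direction: 'ASC' | 'DESC';
};

// Sketch of fromViewSortManifestToUniversalFlatViewSort: manifest entry in,
// universal flat view sort out, keyed back to its parent view.
const fromViewSortManifestToUniversalFlatViewSort = (
  manifest: ViewSortManifest,
  viewUniversalIdentifier: string,
) => ({
  universalIdentifier: manifest.universalIdentifier,
  fieldMetadataUniversalIdentifier: manifest.fieldMetadataUniversalIdentifier,
  direction: manifest.direction,
  viewUniversalIdentifier,
});
```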
## Example usage
```ts
defineView({
name: 'All issues',
objectUniversalIdentifier: 'issue',
sorts: [
{
universalIdentifier: 'all-issues__sort-created-at',
fieldMetadataUniversalIdentifier: 'createdAt',
direction: 'DESC',
},
],
});
```
## Summary
The standalone Apollo client used by `AuthService.renewToken` attached
`loggerLink` unconditionally, while the main `apollo.factory` client
correctly gates it on `isDebugMode`. As a result, **every token refresh
in production printed the `renewToken` response — including the new
access and refresh JWTs — to the browser console** via the `loggerLink`
`RESULT` group.
Reproduced in production: opening devtools shows a
`Twenty-Refresh::Generic` collapsed group on every token renewal,
containing `HEADERS`, `VARIABLES`, `QUERY` and a `RESULT` payload with
the full token strings.
The fix mirrors the gating already used in `apollo.factory.ts`
(`...(isDebugMode ? [logger] : [])`), so the logger is only attached
when `IS_DEBUG_MODE=true`. Local debug behavior is unchanged.
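The gating pattern shared with `apollo.factory.ts` is a conditional spread — an empty array unless debug mode is on, so the logger never enters the production link chain:

```typescript
// Conditional spread: loggerLink is included only when isDebugMode is true.
const buildLinks = <T>(isDebugMode: boolean, loggerLink: T, terminatingLink: T): T[] => [
  ...(isDebugMode ? [loggerLink] : []),
  terminatingLink,
];
```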
## Summary
- Lazy auth-flow routes (`SignInUp`, `Invite`, `ResetPassword`,
`CreateWorkspace`, `CreateProfile`, `SyncEmails`, `InviteTeam`,
`PlanRequired`, `PlanRequiredSuccess`, `BookCallDecision`, `BookCall`)
render through `<Outlet/>` inside `<AuthModal>`. Their `LazyRoute`
`<Suspense>` fallback was the page-level `PageContentSkeletonLoader`, so
the two grey shimmer bars painted **inside the modal box** for a few
hundred ms while each chunk downloaded.
- `LazyRoute` now accepts an optional `fallback` prop (default
unchanged: the existing page skeleton). Every auth-modal route passes
`fallback={null}` so the modal stays empty until the lazy chunk resolves
instead of flashing the shimmer.
- `AuthModal`'s inner `StyledContent` gets a `min-height: 320px` so the
framer-motion `layout` animation doesn't rapidly resize the modal as
inner steps (loader → form → password → 2FA / workspace selection) swap.
The modal can still grow for taller steps; only the rapid jump is
removed.
## Why default-parameter syntax for `fallback`
`fallback ?? <LazyRouteFallback/>` would treat an explicit `null` as "no
value" and still render the default skeleton. Using a default parameter
(`fallback = <LazyRouteFallback/>`) preserves an explicit `null` because
defaults only kick in for `undefined`.
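The difference is easy to see in a standalone sketch (not the real `LazyRoute` component — plain strings stand in for the React nodes):

```typescript
// Contrast the two fallback-resolution strategies described above.
const DEFAULT_FALLBACK = 'skeleton';

// Nullish coalescing: an explicit `null` is swallowed and the default wins.
const withCoalescing = (fallback?: string | null) =>
  fallback ?? DEFAULT_FALLBACK;

// Default parameter: only `undefined` triggers the default, so an explicit
// `null` survives and the caller can opt out of the skeleton entirely.
const withDefaultParam = (fallback: string | null = DEFAULT_FALLBACK) =>
  fallback;
```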
## Summary
Claude.ai's custom remote MCP connector fails with "Couldn't reach the
MCP server" after successfully completing OAuth dynamic client
registration. Driving the flow through Chrome DevTools showed Claude's
backend creates our DCR client (many hundreds of orphan rows visible in
the admin panel), then never returns the user to `/authorize` — it gives
up silently.
**Empirical comparison against known-working MCP servers Claude.ai
connects to identified one concrete difference**: every server that
works returns `registration_client_uri` in the DCR response. We didn't.
| Server | DCR `registration_client_uri` | Claude.ai web connector |
|---|---|---|
| Linear (`mcp.linear.app`) | `/register/<client_id>` | ✅ works |
| Sentry (`mcp.sentry.dev`) | `/oauth/register/<client_id>` | ✅ works |
| Atlassian (`mcp.atlassian.com`) | yes | ✅ works |
| **Twenty** (before this PR) | **missing** | ❌ "Couldn't reach" |
## What this PR changes
### 1. Add `registration_client_uri` to the DCR response
```
{
"client_id": "…",
…existing fields…,
+ "registration_client_uri": "<issuer>/oauth/register/<client_id>"
}
```
Points at the registration's management endpoint per RFC 7591 §3.2.1.
Marked OPTIONAL in the spec but empirically required by Claude.ai.
### 2. New `GET /oauth/register/:clientId` endpoint (RFC 7592 read-back)
Returns public registration metadata (`client_name`, `redirect_uris`,
`grant_types`, `scope`, etc.). 404 for unknown clients.
No `registration_access_token` is issued (and none required to hit this
endpoint): the `client_id` is an unguessable UUID and the fields
returned are already public-readable via
`findApplicationRegistrationByClientId` GraphQL. This matches Linear's
behaviour — they return a `registration_client_uri` but issue no access
token.
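The shape of the change can be sketched as a small helper — the `issuer` parameter and response type are illustrative, not the actual controller code:

```typescript
// Hedged sketch of appending `registration_client_uri` to a DCR response,
// per RFC 7591 §3.2.1 / RFC 7592. Names here are illustrative.
type DcrResponse = Record<string, unknown> & { client_id: string };

const withRegistrationClientUri = (
  issuer: string,
  response: DcrResponse,
): DcrResponse => ({
  ...response,
  // Management endpoint for this registration (read-back per RFC 7592).
  registration_client_uri: `${issuer}/oauth/register/${response.client_id}`,
});
```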
### 3. Advertise `response_modes_supported: ["query"]` in AS metadata
RFC 8414 default, but explicitly listed by Linear / Sentry / Atlassian
and absent from ours. Some clients treat its absence as a capability
gap.
## Why I'm confident this is the root cause
- The failure mode exactly matches an orphaned-DCR retry loop (hundreds
of registrations, none `installed` on a workspace).
- #19847 reporter confirmed Claude Desktop + VS Code work — those
clients use the MCP Python SDK which doesn't require
`registration_client_uri`. **Claude.ai web** uses Anthropic's
proprietary backend client (`User-Agent: Claude-User`), which
empirically does.
- All 3 working reference servers return the field; we were the odd one
out.
## Test plan
- [x] `tsc --noEmit` clean on touched files
- [x] `yarn jest
--testPathPatterns="oauth-discovery.controller|mcp-auth.guard"` → 4/4
pass
- [ ] After deploy:
```bash
curl -s -X POST https://<host>/oauth/register \
  -H 'Content-Type: application/json' \
  -d '{"client_name":"probe","redirect_uris":["https://claude.ai/api/mcp/auth_callback"],"token_endpoint_auth_method":"none"}' \
  | jq .registration_client_uri
# expect: "https://<host>/oauth/register/<uuid>"
```
- [ ] After deploy: add the MCP connector in Claude.ai — user should now
reach the Twenty `/authorize` page
## Honesty
This is the nth fix in a long debugging chain. Unlike the earlier round
of fixes (which were real spec-compliance bugs but not Claude's
blocker), this one is backed by empirical evidence across 3
known-working implementations. If Claude.ai still fails after this
deploys, the remaining delta is `cli_client_id` in AS metadata
(non-standard field, could confuse strict parsers) or a field we
advertise that others don't (e.g. `client_credentials` grant) — both
small, removable, not disruptive.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Refresh the Twenty website with updated homepage, product, pricing,
partner, customer, case study, and release content
- Add and replace supporting imagery, illustrations, and Lottie assets
used across the site
- Adjust layout constants, navigation/footer content, and page-level
copy for the updated marketing experience
- Update Next.js config and ignore rules to support the new assets and
build output
## Testing
- Not run (not requested)
---------
Co-authored-by: Abdullah <125115953+mabdullahabaid@users.noreply.github.com>
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
Splits admin-panel resolvers off the shared `/metadata` GraphQL endpoint
onto a dedicated `/admin-panel` endpoint. The backend plumbing mirrors
the existing `metadata` / `core` pattern (new scope, decorator, module,
factory), and admin types now live in their own
`generated-admin/graphql.ts` on the frontend — dropping 877 lines of
admin noise from `generated-metadata`.
## Why
- **Smaller attack surface on `/metadata`** — every authenticated user
hits that endpoint; admin ops don't belong there.
- **Independent complexity limits and monitoring** per endpoint.
- **Cleaner module boundaries** — admin is a cross-cutting concern that
doesn't match the "shared-schema configuration" meaning of `/metadata`.
- **Deploy / blast-radius isolation** — a broken admin query can't
affect `/metadata`.
Runtime behavior, auth, and authorization are unchanged — this is a
relocation, not a re-permissioning. All existing guards
(`WorkspaceAuthGuard`, `UserAuthGuard`,
`SettingsPermissionGuard(SECURITY)` at class level; `AdminPanelGuard` /
`ServerLevelImpersonateGuard` at method level) remain on
`AdminPanelResolver`.
## What changed
### Backend
- `@AdminResolver()` decorator with scope `'admin'`, naming parallels
`CoreResolver` / `MetadataResolver`.
- `AdminPanelGraphQLApiModule` + `adminPanelModuleFactory` registered at
`/admin-panel`, same Yoga hook set as the metadata factory (Sentry
tracing, error handler, introspection-disabling in prod, complexity
validation).
- Middleware chain on `/admin-panel` is identical to `/metadata`.
- `@nestjs/graphql` patch extended: `resolverSchemaScope?: 'core' |
'metadata' | 'admin'`.
- `AdminPanelResolver` class decorator swapped from
`@MetadataResolver()` to `@AdminResolver()` — no other changes.
### Frontend
- `codegen-admin.cjs` → `src/generated-admin/graphql.ts` (982 lines).
- `codegen-metadata.cjs` excludes admin paths; metadata file shrinks by
877 lines.
- `ApolloAdminProvider` / `useApolloAdminClient` follow the existing
`ApolloCoreProvider` / `useApolloCoreClient` pattern, wired inside
`AppRouterProviders` alongside the core provider.
- 37 admin consumer files migrated: imports switched to
`~/generated-admin/graphql` and `client: useApolloAdminClient()` is
passed to `useQuery` / `useMutation`.
- Three files intentionally kept on `generated-metadata` because they
consume non-admin Documents: `useHandleImpersonate.ts`,
`SettingsAdminApplicationRegistrationDangerZone.tsx`,
`SettingsAdminApplicationRegistrationGeneralToggles.tsx`.
### CI
- `ci-server.yaml` runs all three `graphql:generate` configurations and
diff-checks all three generated dirs.
## Authorization (unchanged, but audited while reviewing)
Every one of the 38 methods on `AdminPanelResolver` has a method-level
guard:
- `AdminPanelGuard` (32 methods) — requires `canAccessFullAdminPanel ===
true`
- `ServerLevelImpersonateGuard` (6 methods: user/workspace lookup + chat
thread views) — requires `canImpersonate === true`
On top of the class-level guards above. No resolver method is accessible
without these flags + `SECURITY` permission in the workspace.
## Test plan
- [ ] Dev server boots; `/graphql`, `/metadata`, `/admin-panel` all
mapped as separate GraphQL routes (confirmed locally during
development).
- [ ] `nx typecheck twenty-server` passes.
- [ ] `nx typecheck twenty-front` passes.
- [ ] `nx lint:diff-with-main twenty-server` and `twenty-front` both
clean.
- [ ] Manual smoke test: log in with a user who has
`canAccessFullAdminPanel=true`, open the admin panel at
`/settings/admin-panel`, verify each tab loads (General, Health, Config
variables, AI, Apps, Workspace details, User details, chat threads).
- [ ] Manual smoke test: log in with a user who has
`canImpersonate=false` and `canAccessFullAdminPanel=false`, hit
`/admin-panel` directly with a raw GraphQL request, confirm permission
error on every operation.
- [ ] Production deploy note: reverse proxy / ingress must route the new
`/admin-panel` path to the Nest server. If the proxy has an explicit
allowlist, infra change required before cutover.
## Follow-ups (out of scope here)
- Consider cutting over the three
`SettingsAdminApplicationRegistration*` components to admin-scope
versions of the app-registration operations so the admin page is fully
on the admin endpoint.
- The `renderGraphiQL` double-assignment in
`admin-panel.module-factory.ts` is copied from
`metadata.module-factory.ts` — worth cleaning up in both.
## Summary
The "AI" acronym was rendered inconsistently across the codebase. The
backend AI module had settled on PascalCase `Ai` (`AiAgentModule`,
`AiBillingService`, `AiChatModule`, `AiModelRegistryService`, etc.),
while frontend components, several DTOs, a few types, and shared
identifiers still used all-caps `AI` (`AIChatTab`,
`AISystemPromptPreviewDTO`, `SettingsPath.AIPrompts`, ...). CLAUDE.md
specifies PascalCase for classes; this PR normalizes everything internal
to `Ai`.
**This is a pure internal rename.** The GraphQL schema is untouched —
`@ObjectType` decorator string arguments, resolver method names (which
become Query/Mutation field names), gql template contents, and the
`generated-metadata/graphql.ts` file are preserved verbatim. The only
visible change is TypeScript identifiers and file names.
## Also folded in (adjacent cleanups)
- **`AgentModelConfigService` → `AiModelConfigService`**. Lives in
`ai-models/` and is used by multiple AI code paths, not just the Agent
entity. The "Agent" prefix was misleading.
- **`generate-text-input.dto.ts` → `generate-text.input.ts`**. The
`ai-agent/dtos/` folder already uses `<entity>.input.ts` convention for
Input classes (`create-agent.input.ts` etc.); the old path mixed
`.dto.ts` file extension with a class that has no DTO suffix. File
rename only; class stays `GenerateTextInput`.
- **Removed stale TODO** in `ai-model-config.type.ts` that asked for the
`AiModelConfig` rename that this PR performs.
## Rename methodology
Bulk rename via perl with anchored regex
`(?<!['"])(?<![A-Z.])AI([A-Z])(?=[a-z])/Ai$1/g`:
- **Lookbehind for non-uppercase** skips adjacent acronyms (`MOSAIC`,
`OIDCSSO`) and leaves `AIRBNB_ID` alone.
- **Lookbehind for non-quote** protects most string literals.
- **Lookahead for lowercase** restricts matches to PascalCase
identifiers (`AIChatTab`), leaving SCREAMING_SNAKE constants untouched.
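The same anchored pattern translates directly to JavaScript regex (lookbehind is supported in modern engines) — illustrative only, the actual rename was the perl one-liner above:

```typescript
// JS equivalent of the anchored perl rename described above.
const renameAi = (source: string): string =>
  source.replace(/(?<!['"])(?<![A-Z.])AI([A-Z])(?=[a-z])/g, 'Ai$1');
```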
Strict file-scope exclusions: `generated-metadata/**`, `generated/**`,
`locales/**`, `migrations/**`, `illustrations/**`, `halftone/**`, and
the two gql template files (`queries/getAISystemPromptPreview.ts`,
`mutations/uploadAIChatFile.ts`).
Post-rename reverts for identifiers where the regex was too eager:
- Backend resolver method names kept: `getAISystemPromptPreview`,
`uploadAIChatFile` (they are GraphQL field names).
- `@ObjectType('AdminAIModels')` / `('AISystemPromptPreview')` /
`('AISystemPromptSection')` kept as-is.
- Backend classes `ClientAIModelConfig` / `AdminAIModelConfig` kept
as-is (they use `@ObjectType()` with no argument, so the class name IS
the schema name).
- External-library symbols restored: `OpenAIProvider`,
`createOpenAICompatible`, `vercelAIIntegration`.
File renames use a two-step rename to work on macOS case-insensitive
filesystems: `git mv X.tsx X.tsx.tmp && git mv X.tsx.tmp renamed.tsx`.
## Diff audit
- 0 changes to migrations
- 0 changes to locale `.po` / `.ts` files
- 0 changes to `generated-metadata/graphql.ts`
- 0 changes to website illustration files (base64 blobs preserved)
- 0 renames inside user-facing translation strings (`t\`…\``,
`msg\`…\``, `<Trans>…</Trans>`)
## Test plan
- [x] `npx nx typecheck twenty-server` — PASS
- [x] `npx nx typecheck twenty-front` — PASS
- [x] `npx jest ai-model admin agent-role` — 79/79 PASS
- [x] `npx oxlint --type-aware` on 118 changed files — 0 errors
- [x] `npx prettier --check` on 118 changed files — clean
- [ ] CI
## Summary
Fixes the pre-existing camelCase typo mentioned in #19839.
The injected `SSOService` property was named `sSOService` instead of the
correct camelCase `ssoService` across the auth module. This is a
straightforward mechanical rename of the property/variable name — no
logic changes.
> **Bonus: pre-existing typo to fix** — `private sSOService: SSOService`
— The variable name is a camelCase typo (`sSOService` instead of
`ssoService`). — #19839
## Changes
Renamed `sSOService` → `ssoService` in 7 files:
- `auth/guards/oidc-auth.guard.ts`
- `auth/guards/saml-auth.guard.ts`
- `auth/guards/oidc-auth.spec.ts`
- `auth/auth.resolver.ts`
- `auth/controllers/sso-auth.controller.ts`
- `auth/strategies/saml.auth.strategy.ts`
- `sso/sso.resolver.ts`
Note: The type `SSOService` (PascalCase class name) is intentionally
left unchanged — it will be addressed in the broader SSO acronym PR from
#19839.
## Test plan
- [ ] Verify `typecheck twenty-server` passes
- [ ] Verify existing auth/SSO tests pass
Co-authored-by: Abhay <abhayjnayakpro@gmail.com>
## The bug
Claude's MCP connector fails with "Couldn't reach the MCP server" on
every URL (`api.twenty.com/mcp`, `app.twenty.com/mcp`,
`<workspace>.twenty.com/mcp`, custom domains). The failure happens
**before** any OAuth flow starts — the client never even reaches the
consent screen.
## Root cause
`POST /mcp` unauthenticated returns:
```
HTTP/2 401
access-control-allow-origin: *
www-authenticate: Bearer resource_metadata="https://…/.well-known/oauth-protected-resource"
content-type: application/json; charset=utf-8
(no access-control-expose-headers)
```
The [Fetch/CORS
spec](https://fetch.spec.whatwg.org/#cors-safelisted-response-header-name)
defines only six response headers as safelisted — `Cache-Control`,
`Content-Language`, `Content-Type`, `Expires`, `Last-Modified`,
`Pragma`. Every other header is withheld from cross-origin JS unless
the server opts it in via `Access-Control-Expose-Headers`.
Result: Claude's browser-side MCP client receives the 401 but
`response.headers.get('WWW-Authenticate')` returns `null`. No
`resource_metadata` URL, no discovery, no OAuth — the client gives up
with the generic "can't reach server" error.
The [MCP authorization
spec](https://modelcontextprotocol.io/specification/draft/basic/authorization)
explicitly requires this header to be exposed.
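The six safelisted names from the Fetch spec can be captured as a small predicate — a sketch, not code from this PR:

```typescript
// CORS-safelisted response header names per the Fetch spec. Anything
// outside this set (including WWW-Authenticate) is invisible to
// cross-origin JS unless the server exposes it explicitly.
const CORS_SAFELISTED_RESPONSE_HEADERS = new Set([
  'cache-control',
  'content-language',
  'content-type',
  'expires',
  'last-modified',
  'pragma',
]);

const isSafelisted = (header: string): boolean =>
  CORS_SAFELISTED_RESPONSE_HEADERS.has(header.toLowerCase());
```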
## Fix
One config change in `main.ts`:
```ts
- cors: true,
+ // Expose WWW-Authenticate so browser-based MCP clients can read the
+ // resource_metadata pointer on 401. Required by MCP authorization spec.
+ cors: { exposedHeaders: ['WWW-Authenticate'] },
```
NestJS's default `cors: true` uses the `cors` package defaults, which
don't set `exposedHeaders`. Moving to an explicit config keeps all
other defaults (origin `*`, standard methods) and adds the single
required exposed header.
## Why it's safe and generally beneficial
- `Access-Control-Expose-Headers: WWW-Authenticate` is sent on every
response but only has an effect when `WWW-Authenticate` is actually
present (i.e. 401s). It's an opt-in permission, not a header-setter.
- `WWW-Authenticate` itself is still only set by `McpAuthGuard` on
401 — this PR doesn't change where or when the header is emitted.
- Covers the entire app, not just `/mcp` — any future 401-returning
endpoint will behave correctly for browser clients automatically.
- No change to origin handling, methods, or credentials. All existing
API / GraphQL / REST traffic is unaffected.
## Verification
After deploy:
```bash
curl -sI -X POST -H "Origin: https://claude.ai" https://api.twenty.com/mcp \
  | grep -iE 'access-control-expose|www-authenticate'
# Expect:
# access-control-expose-headers: WWW-Authenticate
# www-authenticate: Bearer resource_metadata="…"
```
Then re-try adding the MCP connector in Claude — if this was the only
blocker, OAuth should now complete.
## Related
- #19755, #19766, #19824 — prior fixes in the MCP/OAuth discovery chain
(host-aware metadata, path-aware well-known, `TRUST_PROXY` for
`request.protocol`). This PR completes the CORS side of that work.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
Three small spec-compliance fixes called out in an audit against the
[MCP authorization spec
(draft)](https://modelcontextprotocol.io/specification/draft/basic/authorization)
and RFC 9728 / RFC 9207.
### 1. Split Protected Resource Metadata by path (RFC 9728 §3.2)
> The `resource` value returned MUST be identical to the protected
resource's resource identifier value into which the well-known URI path
suffix was inserted.
Today a single handler serves both
`/.well-known/oauth-protected-resource` and
`/.well-known/oauth-protected-resource/mcp` and returns `resource:
<origin>/mcp` from both. That's wrong for the root form — per RFC 9728
the root URL corresponds to the **origin as resource**, and only the
`/mcp`-suffixed URL corresponds to `<origin>/mcp`.
After this PR:
| Request | `resource` field |
|---|---|
| `GET /.well-known/oauth-protected-resource` | `https://<host>` |
| `GET /.well-known/oauth-protected-resource/mcp` | `https://<host>/mcp` |
Both still return the same `authorization_servers`, `scopes_supported`,
and `bearer_methods_supported`.
Claude's current flow happens to work because our WWW-Authenticate
points at the root form and Claude compares `resource` against what it
connected to. Strict clients probing the path-aware URL first were
rejecting us.
### 2. Advertise `authorization_response_iss_parameter_supported: true`
(RFC 9207)
Defense against OAuth mix-up attacks. Required by the [OAuth 2.1
security
BCP](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1).
Signals that clients receiving an authorization response will find the
issuer in the `iss` parameter and can validate it.
### 3. Fix `WWW-Authenticate` challenge: point at path-aware PRM URL,
add `scope` param
- Was: `Bearer resource_metadata="https://<host>/.well-known/oauth-protected-resource"`
- Now: `Bearer resource_metadata="https://<host>/.well-known/oauth-protected-resource/mcp", scope="api profile"`
After change (1), only the path-aware URL returns a PRM document whose
`resource` matches what the MCP client connected to (`<host>/mcp`).
Pointing clients at the right URL keeps discovery consistent.
The `scope` parameter is a SHOULD in RFC 6750 and lets clients ask for
least-privilege scopes on first authorization.
## Not in this PR (queued separately)
From the same audit:
- **Audit JWT `aud` (audience) validation** — the spec requires the
server to reject tokens whose audience doesn't match this resource. Need
a read-only code review to confirm; filing as a follow-up.
- **Audit PKCE enforcement** — we advertise
`code_challenge_methods_supported: ["S256"]`; need to confirm the
`/authorize` flow actually rejects requests missing `code_challenge`.
- **403 `insufficient_scope` challenge format** for step-up auth.
- **CIMD (Client ID Metadata Documents)** support — newer spec
alternative to DCR.
## Test plan
- [x] `yarn jest
--testPathPatterns="mcp-auth.guard|oauth-discovery.controller"` → 4/4
passing
- [x] `tsc --noEmit` clean on touched files
- [ ] After deploy:
```bash
curl -s https://<host>/.well-known/oauth-protected-resource | jq .resource
# expect: "https://<host>"
curl -s https://<host>/.well-known/oauth-protected-resource/mcp | jq .resource
# expect: "https://<host>/mcp"
curl -sI -X POST https://<host>/mcp | grep -i www-authenticate
# expect: Bearer resource_metadata="…/oauth-protected-resource/mcp", scope="api profile"
```
## Related
- #19836 — CORS exposes `WWW-Authenticate` + `MCP-Protocol-Version` so
browser clients can read them. Pairs with this PR.
- #19755 / #19766 / #19824 — the earlier chain that got host-aware
discovery and `TRUST_PROXY` working.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
OAuth 2.1 and the MCP authorization spec mandate PKCE (S256) for public
clients — clients registered with `token_endpoint_auth_method=none`
(no client secret). We advertise `code_challenge_methods_supported:
["S256"]` in `/.well-known/oauth-authorization-server` but our
`/authorize` flow accepted requests from public clients without
`code_challenge`.
## Why this was a soft failure today
`oauth.service.ts:178` already rejects token exchange when a client
presents neither `client_secret` nor `code_verifier`:
```ts
if (!clientSecret && !storedCodeChallenge) {
  return this.errorResponse('invalid_request', 'Either client_secret or code_verifier (PKCE) is required');
}
```
So a public client attempting to bypass PKCE would **eventually** fail —
but only after:
1. Getting a valid authorization code issued at `/authorize`
2. Round-tripping the user through consent
3. Trying to exchange the code at `/token` and finally getting
rejected
That's a wasted user interaction and a fuzzy spec boundary. This PR
rejects at `/authorize` instead, matching the spec's "MUST require
PKCE for public clients" expectation.
## Fix
Single check in `AuthService.generateAuthorizationCode`:
```ts
const isPublicClient = !applicationRegistration.oAuthClientSecretHash;
if (isPublicClient && !codeChallenge) {
  throw new AuthException(
    `code_challenge is required for public clients (PKCE S256, per OAuth 2.1)`,
    AuthExceptionCode.FORBIDDEN_EXCEPTION,
  );
}
```
### Why `!oAuthClientSecretHash` is the right "public" predicate
- Dynamic registration (`POST /oauth/register`) hardcodes
`oAuthClientSecretHash: null` and rejects any
`token_endpoint_auth_method != "none"`
(oauth-registration.controller.ts:120-130).
- Confidential clients registered via the workspace settings UI have a
non-null bcrypt hash.
- The same field is already used as the public/confidential gate in
`validateClient` and `validateClientSecret`.
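For reference, the S256 transform that conformant public clients perform (RFC 7636: `code_challenge = BASE64URL(SHA256(code_verifier))`) can be sketched with Node's crypto — illustrative, not this server's code:

```typescript
import { createHash } from 'node:crypto';

// PKCE S256: hash the verifier and base64url-encode the digest.
const toCodeChallenge = (codeVerifier: string): string =>
  createHash('sha256').update(codeVerifier).digest('base64url');
```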
## Scope
- ✅ Dynamic-registration clients (Claude, other MCP connectors) — MUST
now supply code_challenge. They already do; no behavior change for
conformant clients.
- ✅ The seeded twenty-cli registration — public client, already uses
PKCE. No change.
- ➖ Confidential clients (workspace-admin-registered OAuth apps with a
client_secret) — unaffected, they authenticate at the token endpoint.
## Related
- #19836 — CORS exposes `WWW-Authenticate` / `MCP-Protocol-Version`
- #19838 — RFC 9728 PRM split + RFC 9207 iss param + `scope` in
WWW-Authenticate challenge
## Test plan
- [x] `tsc --noEmit` clean on modified file (pre-existing
`twenty-shared` dist errors unrelated)
- [ ] Integration-level smoke test after deploy:
```bash
# Register a dynamic client (public)
CLIENT_ID=$(curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"client_name":"pkce-test","redirect_uris":["http://localhost/cb"]}' \
  https://<host>/oauth/register | jq -r .client_id)
# Without code_challenge → should now 4xx at /authorize (cannot easily
# test outside the React UI, but the GraphQL authorizeApp mutation will
# throw AuthException)
```
- [ ] Claude MCP connector still completes OAuth end-to-end (it always
sends code_challenge, so no-op)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- The exception class under `ai-agent/` was serving every AI surface
(agent, chat, role, models, generate-text), so `Agent` was a misnomer.
Promoted to the `ai/` namespace; renamed `AgentException` →
`AiException`, `AgentExceptionCode` → `AiExceptionCode`, and related
interceptor / filter / handler / file names accordingly.
- Split the single `AGENT_NOT_FOUND` code into entity-specific codes.
Chat-thread lookups no longer reuse the agent identifier.
- **Fixes Sentry 500s on `GetChatMessages` / `chatThread`.** Every
"Thread not found" and "Queued message not found" throw site in ai-chat
was previously wired to `AGENT_EXECUTION_FAILED`, which maps to
`InternalServerError` (HTTP 500). They now use `THREAD_NOT_FOUND` /
`MESSAGE_NOT_FOUND`, both of which map to `NotFoundError` (HTTP 404) in
the GraphQL and REST handlers.
The underlying cause of *why* clients are asking for threads that no
longer resolve for them — per-user chat-thread create events being
broadcast workspace-wide — is addressed separately in a follow-up PR.
### Code map
- Added: `ai/ai.exception.ts`,
`ai/utils/ai-graphql-api-exception-handler.util.ts` (+ spec with new
THREAD/MESSAGE cases),
`ai/interceptors/ai-graphql-api-exception.interceptor.ts`,
`ai/filters/ai-api-exception.filter.ts`
- Deleted: `ai/ai-agent/agent.exception.ts`,
`ai/ai-agent/utils/agent-graphql-api-exception-handler.util.ts` (+
spec),
`ai/ai-agent/interceptors/agent-graphql-api-exception.interceptor.ts`,
`ai/ai-agent/filters/agent-api-exception.filter.ts`
- Updated: 21 call sites across ai-agent, ai-agent-execution,
ai-agent-role, ai-chat, ai-generate-text, ai-models, role, and
workspace-migration validators.
## Test plan
- [x] `npx nx typecheck twenty-server`
- [x] `npx jest ai-graphql-api-exception-handler` (3/3 including new
THREAD_NOT_FOUND and MESSAGE_NOT_FOUND cases)
- [x] `npx jest agent-role.service` (9/9)
- [x] `npx oxlint --type-aware` on all changed files (0 warnings/errors)
- [x] `npx prettier --check` on all changed files
- [ ] CI
## Summary
- Bumps `twenty-sdk`, `twenty-client-sdk`, and `create-twenty-app` from
`1.22.0` to `1.23.0-canary.1`.
## Test plan
- [ ] CI green
Made with [Cursor](https://cursor.com)
## Summary
Logic-function bundles produced by the twenty-sdk CLI were ~1.18 MB even
for a one-line handler. Root cause: the SDK shipped as a single bundled
barrel (`twenty-sdk` → `dist/index.mjs`) that co-mingled server-side
definition factories with the front-component runtime, validation (zod),
and React. With no `"sideEffects"` declaration on the SDK package,
esbuild had to assume every module-level statement could have side
effects and refused to drop unused code.
This PR restructures the SDK so consumers' bundlers can tree-shake at
the leaf level:
- **Reorganized SDK source.** All server-side definition factories now
live under `src/sdk/define/` (agents, application, fields,
logic-functions, objects, page-layouts, roles, skills, views,
navigation-menu-items, etc.). All front-component runtime
(components, hooks, host APIs, command primitives) lives under
`src/sdk/front-component/`. The legacy bare `src/sdk/index.ts` is
removed; the bare `twenty-sdk` entry no longer exists.
- **Split the build configs by purpose / runtime env.** Replaced
`vite.config.sdk.ts` with two purpose-specific configs:
- `vite.config.define.ts` — node target, externals from package
`dependencies`, emits to `dist/define/**`
- `vite.config.front-component.ts` — browser/React target, emits to
`dist/front-component/**`
Both use `preserveModules: true` so each leaf ships as its own `.mjs`.
- **\`\"sideEffects\": false\`** on `twenty-sdk` so esbuild can drop
unreferenced re-exports.
- **\`package.json\` exports + \`typesVersions\`** updated: dropped the
bare \`.\` entry, added \`./front-component\`, and pointed \`./define\`
at the new per-module dist layout.
- **Migrated every internal/example/community app** to the new subpath
imports (`twenty-sdk/define`, `twenty-sdk/front-component`,
`twenty-sdk/ui`).
- **Added `bundle-investigation` internal app** that reproduces the
bundle bloat and demonstrates the fix.
- Cleaned up dead `twenty-sdk/dist/sdk/...` references in the
front-component story builder, the call-recording app, and the SDK
tsconfig.
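The resulting `package.json` shape, sketched with illustrative paths (the actual dist layout may differ):

```json
{
  "name": "twenty-sdk",
  "sideEffects": false,
  "exports": {
    "./define": "./dist/define/index.mjs",
    "./front-component": "./dist/front-component/index.mjs"
  }
}
```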
## Bundle size impact
Measured with esbuild using the same options as the SDK CLI
(`packages/twenty-apps/internal/bundle-investigation`):
| Variant | Imports | Before | After |
| --- | --- | --- | --- |
| `01-bare` | `defineLogicFunction` from `twenty-sdk/define` | 1177 KB | **1.6 KB** |
| `02-with-sdk-client` | + `CoreApiClient` from `twenty-client-sdk/core` | 1177 KB | **1.9 KB** |
| `03-fetch-issues` | + GitHub GraphQL fetch + JWT signing + 2 mutations | 1181 KB | **5.8 KB** |
| `05-via-define-subpath` | same as `01`, via the public subpath | 1177 KB | **1.7 KB** |
That's a ~735× reduction on the bare baseline. Knock-on benefits for
Lambda warm + cold starts, S3 upload size, and `/tmp` disk usage in
warm containers.
## Test plan
- [x] `npx nx run twenty-sdk:build` succeeds
- [x] `npx nx run twenty-sdk:typecheck` passes
- [x] `npx nx run twenty-sdk:test:unit` passes (31 files / 257 tests)
- [x] `npx nx run-many -t typecheck
--projects=twenty-front,twenty-server,twenty-front-component-renderer,twenty-sdk,twenty-shared,bundle-investigation`
passes
- [x] `node
packages/twenty-apps/internal/bundle-investigation/scripts/build-variants.mjs`
produces the sizes above
- [ ] CI green
Made with [Cursor](https://cursor.com)
## Summary
Fixes cross-user cache contamination for AI chat threads in multi-user
workspaces.
`agentChatThread` is the only user-scoped (`userWorkspaceId`-filtered)
entry in `METADATA_NAME_TO_ENTITY_KEY`, but its create/update events
were going through `WorkspaceEventBroadcaster`, which fans out to every
active SSE stream in the workspace. Every client therefore received
other users' threads into their local `agentChatThreads` metadata store
(which is persisted to localStorage). On a subsequent session,
`AgentChatThreadInitializationEffect` would pick the
most-recently-updated thread — potentially another user's — and fire
`GetChatMessages` against it; the server's `userWorkspaceId` filter
didn't match, producing "Thread not found" errors (now 404 thanks to
twentyhq/twenty#19831).
### The fix mirrors the RLS pattern already used for object records
`ObjectRecordEventPublisher` already reads
`streamData.authContext.userWorkspaceId` to filter per subscriber.
Metadata events had no equivalent. This PR closes that gap with a
minimal, opt-in change:
- Add optional `recipientUserWorkspaceIds?: string[]` to
`WorkspaceBroadcastEvent`. Omit → workspace-wide (unchanged for views,
objects, fields, etc.). Set → delivered only to streams whose
`authContext.userWorkspaceId` is in the list.
- `WorkspaceEventBroadcaster.broadcast` builds the payload per stream
and skips events whose recipient doesn't match.
- Both `agentChatThread` broadcast call sites in `AgentChatService` now
pass `recipientUserWorkspaceIds: [userWorkspaceId]`.
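The per-stream delivery check can be sketched as a pure predicate — types mirror the description above but are illustrative, not the actual service:

```typescript
type BroadcastEvent = {
  name: string;
  recipientUserWorkspaceIds?: string[];
};
type Stream = { authContext: { userWorkspaceId: string } };

const shouldDeliver = (event: BroadcastEvent, stream: Stream): boolean =>
  // Omitted recipient list → workspace-wide broadcast (previous behavior
  // for views, objects, fields, etc.). Set → user-scoped delivery.
  event.recipientUserWorkspaceIds === undefined ||
  event.recipientUserWorkspaceIds.includes(stream.authContext.userWorkspaceId);
```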
### Scope
3 files, +40/-11 LOC:
-
`subscriptions/workspace-event-broadcaster/types/workspace-broadcast-event.type.ts`
-
`subscriptions/workspace-event-broadcaster/workspace-event-broadcaster.service.ts`
- `metadata-modules/ai/ai-chat/services/agent-chat.service.ts`
### Stale cache note
Existing clients still carry poisoned localStorage from before this
lands. They self-heal on sign-out/in because
`clearAllSessionLocalStorageKeys` already drops `agentChatThreads`. If
we want to actively flush on deploy, a frontend cache-version bump can
follow in a separate PR.
### Related
- twentyhq/twenty#19831 turns the resulting "Thread not found" from 500
to 404. This PR addresses the root cause; #19831 stops the Sentry noise.
## Test plan
- [x] `npx nx typecheck twenty-server`
- [x] `npx oxlint --type-aware` on all 3 files (0 warnings/errors)
- [x] `npx prettier --check` on all 3 files
- [ ] Manual: two users in the same workspace, user A creates/sends a
chat thread — confirm user B's session does not receive the event and
their `chatThreads` sidebar is unaffected
- [ ] CI
## Summary
Lambda warm-invocations of logic functions were spending **~440 ms**
re-parsing and re-evaluating the user bundle on every call. The executor
wrote the user code to a **randomly-named** temp file and `import()`-ed
it, so each warm call resolved to a new URL and Node's ESM cache could
never reuse the previous module record.
This PR makes the executor write to a **content-hash filename**, skip
the write when the file already exists, and stop deleting it. Identical
code now reuses the same module record across warm calls in the same
container, dropping warm-invocation overhead by **~30–40%**.
## What changed
- `executor/index.mjs`: temp filename derived from `sha256(code)`, write
skipped when file exists, no `fs.rm` on cleanup.
- `lambda.driver.ts`: single structured `[lambda-timing]` log per
invocation with `totalMs / buildExecutorMs / getBuiltCodeMs /
payloadBytes / invokeSendMs / reportDurationMs / billedMs /
initDurationMs / coldStart`. Goes through the standard NestJS `Logger`.
No behavioural change for callers: same input → same output, same error
semantics.
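The caching idea can be sketched like this (simplified: an in-memory set stands in for the real `fs` existence check in `executor/index.mjs`, and the path prefix is illustrative):

```typescript
import { createHash } from 'node:crypto';

// Derive a stable temp filename from the code contents, so identical
// bundles resolve to the same URL and Node's ESM cache can reuse the
// previously evaluated module record.
const bundlePathFor = (code: string): string =>
  `/tmp/user-bundle-${createHash('sha256').update(code).digest('hex')}.mjs`;

// Stand-in for "write only when the file does not exist yet".
const writtenFiles = new Set<string>();

const ensureBundleWritten = (code: string): { path: string; wrote: boolean } => {
  const path = bundlePathFor(code);
  if (writtenFiles.has(path)) {
    return { path, wrote: false }; // warm call: skip the write entirely
  }
  writtenFiles.add(path); // the real executor writes the file here
  return { path, wrote: true };
};
```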
### Caveat: module-scope state now persists across warm calls
With a stable filename, the user bundle is evaluated **once per warm
container**. Any module-scoped state or top-level side-effects in user
code are now shared across invocations of the same container, instead of
re-running on every call. This is documented in the executor and is the
intended trade-off — module scope should be treated as a per-container
cache, not as per-call isolation.
## Findings — measured impact
Same logic function (`fetch-prs`, ~12k PRs to page through), same
workspace, same Lambda config (eu-west-3, 512 MB), token cache primed.
### Warm invocations
| Phase | Before fix | After fix | Δ |
| --- | --- | --- | --- |
| Executor `import(userBundle)` | ~440 ms | **~0 ms** | **-440 ms** |
| Lambda billed duration | ~1.5–1.7 s | **~1.0–1.1 s** | **~30–40%** |
| Server-perceived round-trip | ~1.7–2.0 s | **~1.0–1.2 s** | **~30–40%** |
### Cold starts
Unchanged — the cache helps subsequent warm calls in the same container,
not the first one. Init Duration stays ~130–170 ms; total cold call
~2.5–3.0 s.
### Stress
Could not reproduce the previously-reported "every ~10th call times
out" behaviour after the fix:
- 30 sequential calls: max 1.7 s, median ~1.1 s, 0 timeouts
- 50 concurrent calls: max 9.4 s (clear cold-start cluster), median ~1.5
s, 0 timeouts
Hypothesis: the warm-import overhead was eating into the headroom
against the function timeout under bursty load; removing it pushed
everything well below the limit.
## Observability
One structured log line per invocation, sent through the standard NestJS
logger:
```
[lambda-timing] fnId=abc123 totalMs=1187 buildExecutorMs=2
getBuiltCodeMs=3 payloadBytes=1466321 invokeSendMs=1180
reportDurationMs=992 billedMs=1000 initDurationMs=n/a coldStart=false
```
`coldStart=true` whenever Lambda spun up a fresh container; on warm
calls `buildExecutorMs` and `getBuiltCodeMs` collapse to
single-digit ms, confirming the cache fix is working.
## Test plan
- [ ] CI green.
- [ ] Deploy to a Lambda-backed env, trigger a logic function several
times in a row.
- [ ] Confirm `[lambda-timing]` warm invocations show `totalMs`
~30–40% lower than before, and `coldStart=false` after the first call
in a container.
- [ ] Push a new version of an app; confirm the next call shows higher
`buildExecutorMs` (new hash, new file written) followed by warm calls
again.
- [ ] Smoke test: errors thrown by the user handler are still surfaced
correctly.
Made with [Cursor](https://cursor.com)
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## The bug
Pasting `https://<workspace>.twenty.com/mcp` into an MCP client (Claude
connector, etc.) fails discovery. Curl shows why:
```bash
$ curl -si https://twentyfortwenty.twenty.com/.well-known/oauth-protected-resource
HTTP/2 200
...
{
"resource": "http://{workspace}.twenty.com/mcp",
"authorization_servers": ["http://twentyfortwenty.twenty.com"],
...
}
```
The response advertises `http://` even though the request came in on
`https://`. RFC 9728 / RFC 8707 require the client to validate that the
advertised `resource` matches the URL it connected to, so strict MCP
clients reject the mismatch and OAuth never starts.
## Why `request.protocol` returns "http"
Per [Express docs](https://expressjs.com/en/guide/behind-proxies.html),
`request.protocol` returns the socket-level protocol unless
`app.set('trust proxy', ...)` is configured. In our deployment:
```
client -- https --> Cloudflare -- https --> ingress-nginx -- http --> NestJS pod
```
TLS is terminated at the edge. The upstream TCP connection into the pod
is plain HTTP, and nginx sets `X-Forwarded-Proto: https` for the pod to
read. Without a `trust proxy` setting, Express ignores
`X-Forwarded-Proto` and `request.protocol === 'http'`.
`main.ts` currently has no `app.set('trust proxy', ...)` call anywhere.
## Why this only surfaced now
`grep -rn request.protocol` finds three pre-existing call sites —
`RestApiMetadataService`, `OpenApiService`, `RouteTriggerService`. All
three wrap it in `getServerUrl({ serverUrlEnv: SERVER_URL,
serverUrlFallback: `${request.protocol}://${request.get('host')}` })`,
which returns `SERVER_URL` whenever it's non-empty. In production
`SERVER_URL` is always set (e.g. `api.twenty.com`), so the
`request.protocol` branch is effectively dead code there.
#19755 introduced the first call site that uses `request.protocol`
unconditionally — the OAuth discovery controller has to echo the request
host, because the whole point is supporting multiple paste-able origins
(workspace subdomains, custom domains, etc.). That's why this is the
first "wrong protocol" bug anyone has seen in our app.
## The fix
One line in `main.ts`:
```ts
app.set('trust proxy', twentyConfigService.get('TRUST_PROXY'));
```
Backed by a new `TRUST_PROXY` env var with a default. `request.protocol`
then honors `X-Forwarded-Proto`, `request.ip` honors `X-Forwarded-For`,
etc. OAuth discovery URLs come out on the right scheme, and any future
`request.protocol` callers Just Work.
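Conceptually, what `trust proxy` changes can be sketched without Express (the header name and the first-entry-wins rule match Express's behaviour; the resolver itself is illustrative):

```typescript
// Resolve the effective protocol the way Express does: use the
// socket-level protocol unless the immediate peer is trusted, in
// which case honor the first X-Forwarded-Proto entry.
const resolveProtocol = (
  socketProtocol: 'http' | 'https',
  peerIsTrusted: boolean,
  xForwardedProto?: string,
): string => {
  if (!peerIsTrusted || xForwardedProto === undefined) {
    return socketProtocol;
  }
  return xForwardedProto.split(',')[0].trim();
};
```

With the peer untrusted (no `trust proxy`), `X-Forwarded-Proto: https` is ignored and the plain-HTTP upstream hop wins, which is exactly the bug above.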
## Why this needs to be configurable (not hardcoded)
Twenty is open-source and deployed in at least three distinct
topologies:
1. **Kubernetes with ingress** (us, enterprise self-hosters) — TLS
terminated upstream, needs `trust proxy` **on**.
2. **Self-host behind a user-supplied reverse proxy** (Caddy, Traefik,
nginx — our [recommended
setup](https://twenty.com/developers/section/self-hosting)) — same as
above, needs `trust proxy` **on**.
3. **Self-host with NestJS exposed directly to the internet** — no
upstream proxy, needs `trust proxy` **off** (otherwise any curl with
`X-Forwarded-For: 1.2.3.4` spoofs `request.ip`, poisoning rate-limiters
and audit logs).
There is no single static value that's correct for all three. Express
makes this a setting for exactly this reason — we follow suit.
## Why the default is `'loopback, linklocal, uniquelocal'`
Shorthand for loopback (127/8, ::1), link-local (169.254/16, fe80::/10),
and unique-local (10/8, 172.16/12, 192.168/16, fc00::/7). In practical
terms: **trust peers coming from private networks; don't trust the
public internet**.
This default is correct for shapes 1 and 2 (cloud, proxied self-host)
because the ingress/proxy peer is always a private-network IP in every
sane deployment.
For shape 3 (directly exposed), the default is still safe because public
clients have public IPs, which are not in any of those ranges — so
`X-Forwarded-For` from an attacker on the internet is ignored. The only
way to be bitten is the exotic case where a public client reaches NestJS
through a private-network hop that isn't a proxy (e.g. a NAT appliance
that forwards to the pod on a private IP and blindly appends headers).
Narrow attack surface, and an operator running that kind of setup is
expected to configure `TRUST_PROXY=false` explicitly.
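A rough IPv4-only sketch of what the shorthand expands to (Express's real implementation delegates to the `proxy-addr` package and also covers the IPv6 ranges; this just shows the idea):

```typescript
// Approximate the 'loopback, linklocal, uniquelocal' shorthand for IPv4.
const TRUSTED_CIDRS: Array<[string, number]> = [
  ['127.0.0.0', 8],    // loopback
  ['169.254.0.0', 16], // link-local
  ['10.0.0.0', 8],     // unique-local (RFC 1918)
  ['172.16.0.0', 12],
  ['192.168.0.0', 16],
];

// Pack dotted-quad notation into an unsigned 32-bit integer.
const toInt = (ip: string): number =>
  ip.split('.').reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;

// A peer is trusted iff its address falls inside one of the ranges.
const isTrustedPeer = (ip: string): boolean =>
  TRUSTED_CIDRS.some(([base, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0;
    return (toInt(ip) & mask) === (toInt(base) & mask);
  });
```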
"Safer than the naïve `true`, more useful than `false`" — this matches
what Rails, Django, and many other frameworks recommend for
Kubernetes-style deployments.
## Why an env var instead of hardcoded
- Rejecting hardcoded `true`: would expose shape-3 self-hosters to IP
spoofing without a way to opt out.
- Rejecting hardcoded `false`: would leave cloud + shape-2 self-hosters
broken, same bug as today.
- Accepting string-typed env (not boolean): Express's `trust proxy`
accepts booleans, hop counts (`1`, `2`), IP ranges (`'10.0.0.0/8'`), and
named CIDRs (`'loopback'`). A boolean would hide that flexibility;
operators occasionally need the richer values. The string maps 1:1 onto
what Express accepts.
## Deployment matrix
| Deployment | Default works? | Override needed? |
|---|---|---|
| Cloud (us, K8s + nginx ingress + Cloudflare) | ✓ | — |
| Self-host behind reverse proxy (recommended) | ✓ | — |
| Self-host exposed directly on public IP | ✓ (public IPs not in private ranges) | Optional: `TRUST_PROXY=false` for strictness |
| Local dev (direct, no proxy) | ✓ (no `X-Forwarded-*` headers arrive) | — |
| Exotic: multi-hop through non-sanitizing private-network middlebox | Risky | `TRUST_PROXY=false` |
## Related
- Blocks MCP connector OAuth on `<ws>.twenty.com` / custom domains.
After deploy: `curl -s
https://<ws>.twenty.com/.well-known/oauth-protected-resource | jq
.resource` should return `https://...` (not `http://...`).
- Fixes latent issue in `RestApiMetadataService`, `OpenApiService`,
`RouteTriggerService` fallback paths (pre-existing but dead in
production because `SERVER_URL` is always set — no behavior change
there).
## Test plan
- [x] `tsc --noEmit` clean
- [ ] After deploy: `curl -s
https://twentyfortwenty.twenty.com/.well-known/oauth-protected-resource`
returns `https://` URLs
- [ ] After deploy: MCP connector in Claude successfully completes OAuth
against `https://<ws>.twenty.com/mcp`
- [ ] No change in `request.ip` logging behavior on cloud (nginx-ingress
peer is already private-network, was already being trusted implicitly by
every framework layer that wasn't `request.protocol`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
The 1.23 backfill command (`upgrade:1-23:backfill-record-page-layouts`)
creates standard page layout widgets from `STANDARD_PAGE_LAYOUTS`. Some
widgets reference field metadatas via `universalConfiguration` (e.g. the
`opportunity.owner` FIELD widget pointing at universal identifier
`20202020-be7e-4d1e-8e19-3d5c7c4b9f2a`).
If a workspace's matching field metadata does not exist or has a
different universal identifier (e.g. older workspaces created before
standard universal identifiers were backfilled), the runner throws
```
Field metadata not found for universal identifier: 20202020-be7e-4d1e-8e19-3d5c7c4b9f2a
```
and the entire migration for that workspace aborts. This was the
underlying cause behind the `Migration action 'create' for
'pageLayoutWidget' failed` error surfaced by #19823.
## Summary
The executor lambda for user logic functions is created in
`LambdaDriver` without a `MemorySize` parameter, so AWS Lambda falls
back to its 128 MB default. That cap is too tight for non-trivial logic
functions — large upstream GraphQL responses, JSON parsing of paginated
batches, and chained Twenty Core API mutations push the process over the
limit and trigger an OOM SIGKILL surfaced to the user as:
```
Runtime exited with error: signal: killed
```
This bumps the executor lambda memory to **512 MB** (matching the
existing `BUILDER_LAMBDA_MEMORY_MB`). The change is applied on both:
- The `CreateFunctionCommand` path used when a logic function is first
deployed.
- The `UpdateFunctionConfigurationCommand` path used when the deps/SDK
layer wiring is refreshed — so existing functions get reconfigured on
next deploy without any additional manual action.
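The shape of the change, sketched with plain objects (the field names follow the AWS SDK's `CreateFunctionCommand` / `UpdateFunctionConfigurationCommand` inputs; the function name and surrounding driver wiring are illustrative):

```typescript
// Shared constant so the executor matches the builder lambda's sizing.
const EXECUTOR_LAMBDA_MEMORY_MB = 512;

// On first deploy (CreateFunctionCommand input, abridged):
const createInput = {
  FunctionName: 'logic-fn-example', // illustrative name
  MemorySize: EXECUTOR_LAMBDA_MEMORY_MB,
};

// On deps/SDK layer refresh (UpdateFunctionConfigurationCommand input,
// abridged) — existing functions pick up the new size on next deploy:
const updateInput = {
  FunctionName: 'logic-fn-example',
  MemorySize: EXECUTOR_LAMBDA_MEMORY_MB,
};
```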
## Why 512 MB
Lambda compute is allocated proportionally to memory. 512 MB:
- Matches the builder lambda already in this file
(`BUILDER_LAMBDA_MEMORY_MB`).
- Is comfortably above the 128 MB default that the existing OOMs are
hitting.
- Stays well below the higher tiers, keeping the per-invocation cost
increase modest.
Made with [Cursor](https://cursor.com)
## Summary
When a workspace migration action fails during workspace iteration (e.g.
during upgrade commands), only the wrapper message was logged:
```
[WorkspaceIteratorService] Error in workspace 7914ba64-...: Migration action 'create' for 'pageLayoutWidget' failed
```
The underlying error (transpilation/metadata/workspace schema) and its
stack were swallowed, making production debugging painful.
This PR adds a follow-up log entry for each inner error attached to a
`WorkspaceMigrationRunnerException`, including its message and stack
trace. The runner exception itself is untouched — it already exposes
structured `errors` (`actionTranspilation`, `metadata`,
`workspaceSchema`).
After this change, logs look like:
```
[WorkspaceIteratorService] Error in workspace 7914ba64-...: Migration action 'create' for 'pageLayoutWidget' failed
[WorkspaceIteratorService] Caused by actionTranspilation in workspace 7914ba64-...: <real reason>
at ...
```
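A sketch of the follow-up logging (the exception shape mirrors the structured `errors` described above; the logger callback and helper name are illustrative):

```typescript
type MigrationInnerError = {
  type: 'actionTranspilation' | 'metadata' | 'workspaceSchema';
  message: string;
  stack?: string;
};

type RunnerException = Error & { errors: MigrationInnerError[] };

// Emit one follow-up line per inner error so the real cause and its
// stack trace survive into production logs.
const logInnerErrors = (
  log: (line: string) => void,
  workspaceId: string,
  exception: RunnerException,
): void => {
  for (const inner of exception.errors) {
    log(
      `Caused by ${inner.type} in workspace ${workspaceId}: ${inner.message}` +
        (inner.stack ? `\n${inner.stack}` : ''),
    );
  }
};
```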
## Test plan
- [ ] Trigger a failing workspace migration (e.g. backfill record page
layouts) on a workspace and confirm the underlying cause + stack now
appear in logs.
Made with [Cursor](https://cursor.com)
## Summary
The `--light` flag of `workspace:seed:dev` was supposed to seed a single
workspace for thin dev containers, but it was only filtering the rich
workspaces (Apple, YCombinator) — the `Empty3`/`Empty4` fixtures
introduced in #19559 for upgrade-sequence integration tests were always
seeded.
So `--light` actually produced **3** workspaces:
- Apple
- Empty3
- Empty4
In single-workspace mode (`IS_MULTIWORKSPACE_ENABLED=false`, the default
for the `twenty-app-dev` container),
[`WorkspaceDomainsService.getDefaultWorkspace`](https://github.com/twentyhq/twenty/blob/main/packages/twenty-server/src/engine/core-modules/domain/workspace-domains/services/workspace-domains.service.ts)
returns the most recently created workspace — Empty4 — which has no
users. The prefilled `tim@apple.dev` therefore cannot sign in, which
breaks flows that depend on the default workspace such as `yarn twenty
remote add --local`'s OAuth handshake against the dev container.
This PR makes `--light` actually skip the empty fixtures so the dev
container ends up with a single workspace (Apple). The default (no flag)
invocation, used by `database:reset` for integration tests, still seeds
all four workspaces, so
`upgrade-sequence-runner-integration-test.util.ts` keeps working
unchanged.
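The fixed filtering can be sketched as follows (fixture names are from the PR; the flag shape and seeding internals are assumptions):

```typescript
type WorkspaceFixture = { name: string; isEmptyFixture: boolean };

const ALL_FIXTURES: WorkspaceFixture[] = [
  { name: 'Apple', isEmptyFixture: false },
  { name: 'YCombinator', isEmptyFixture: false },
  { name: 'Empty3', isEmptyFixture: true }, // upgrade-sequence test fixture
  { name: 'Empty4', isEmptyFixture: true },
];

// --light keeps a single rich workspace AND now also skips the empty
// upgrade-test fixtures; the default (no flag) keeps all four.
const workspacesToSeed = (light: boolean): string[] =>
  (light
    ? ALL_FIXTURES.filter((w) => !w.isEmptyFixture).slice(0, 1)
    : ALL_FIXTURES
  ).map((w) => w.name);
```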
Fixed using Opus 4.7. I wanted to test this model out, and I know you
care about quality in this repo, so please let me know if this is good
code. It looks good to me.
Fixes #19740.
## Summary
PostgreSQL UNIQUE indexes treat two `''` values as duplicates but two
`NULL`s as distinct. `validateAndInferPhoneInput` was persisting blank
`primaryPhoneNumber` as `''` instead of `NULL`, so a second record with
an empty unique phone failed with a constraint violation. The sibling
composite transforms (`transformEmailsValue`, `removeEmptyLinks`,
`transformTextField`) already canonicalize null-equivalent values;
phones was the outlier.
- Empty-string phone sub-fields now normalize to `null`. `undefined` is
preserved so partial updates leave columns the user did not touch alone.
- `PhonesFieldGraphQLInput` drops the aspirational `CountryCode` brand
on input. GraphQL delivers raw strings at the boundary; branding happens
during validation.
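The normalization rule can be sketched as (the sub-field names here are illustrative of the phones composite; the real logic lives in `validateAndInferPhoneInput`):

```typescript
type PhonesInput = {
  primaryPhoneNumber?: string | null;
  primaryPhoneCallingCode?: string | null;
  primaryPhoneCountryCode?: string | null;
};

// '' → null so the UNIQUE index sees blanks as distinct NULLs;
// undefined is preserved so partial updates leave untouched columns alone.
const normalizePhoneSubField = (
  value: string | null | undefined,
): string | null | undefined => (value === '' ? null : value);

const normalizePhonesInput = (input: PhonesInput): PhonesInput => ({
  primaryPhoneNumber: normalizePhoneSubField(input.primaryPhoneNumber),
  primaryPhoneCallingCode: normalizePhoneSubField(input.primaryPhoneCallingCode),
  primaryPhoneCountryCode: normalizePhoneSubField(input.primaryPhoneCountryCode),
});
```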
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
Adds the ability to change the icon of a record page layout tab from the
side panel in tab edit mode, and sets a default icon for newly-created
record page tabs (no default for dashboards).
<img width="916" height="312" alt="Screenshot 2026-04-17 at 19 55 51"
src="https://github.com/user-attachments/assets/d9f57e89-d40d-483e-b508-5d7318df1ef5"
/>
## Context
Adds a burger-menu dropdown on the layout customization bar exposing a
"Reset record page layout" action, so users can reset a record page
layout straight from the edit bar (previously only available in object
settings).
<img width="1512" height="849" alt="Screenshot 2026-04-17 at 18 03 19"
src="https://github.com/user-attachments/assets/145a77b8-6234-4987-ae31-38eccaa0548d"
/>
## Context
"Reset to default" action is rejected by the backend for custom entities
because there is no "default" concept for them
## Implementation
Grey out the Reset to default action on record page-layout tabs and
widgets when the entity either has no applicationId yet (unsaved draft —
previously slipped through the existing check), or belongs to the
workspace custom application.
<img width="926" height="370" alt="Screenshot 2026-04-17 at 18 50 31"
src="https://github.com/user-attachments/assets/c7c163f4-17a6-4b69-a66d-90f9085d27a2"
/>
## Twenty for Twenty: Resend module
Introduces `packages/twenty-apps/internal/twenty-for-twenty`, the
official internal Twenty app, with a first module integrating
[Resend](https://resend.com).
### Breakdown
**Resend module** (`src/modules/resend/`)
- Two app variables: `RESEND_API_KEY` and `RESEND_WEBHOOK_SECRET`.
- **Objects**: `resendContact`, `resendSegment`, `resendTemplate`,
`resendBroadcast`, `resendEmail`, with relations between them and to
standard `person`.
- **Inbound sync (Resend → Twenty)**:
- Cron-driven logic function `sync-resend-data` (every 5 min) pulling
all entities through paginated, rate-limit-aware utilities
(`sync-contacts`, `sync-segments`, `sync-templates`, `sync-broadcasts`,
`sync-emails`).
- Webhook endpoint (`resend-webhook`) verifying signatures and handling
`contact.*` and `email.*` events in real time.
- `find-or-create-person` auto-links Resend contacts to Twenty people by
email.
- **Outbound sync (Twenty → Resend)**: DB-event logic functions for
`contact.created/updated/deleted` and `segment.created/deleted`, with a
`lastSyncedFromResend` field for loop prevention.
- **UI**: views, page layouts, navigation menu items, and front
components (`HtmlPreview`, `RecordHtmlViewer`) to preview email/template
HTML in record pages; `sync-resend-data` command exposed as a front
component.
### Setup
See the new README for install steps, webhook configuration, and local
testing with the Resend CLI.
## Context
On custom objects, clicking "+ New Tab" on a record page layout never
exposed deactivated tabs for reactivation, even though `isActive: false`
tabs were correctly returned by the API. Standard objects worked fine.
## Fix
`isReactivatableTab` gated reactivation on `tab.applicationId ===
objectMetadata.applicationId`. For custom objects these two ids are
intentionally different.
This check was unnecessary after all: we simply want to check whether a
tab is inactive (only non-custom entities can be deactivated) 👍
<img width="694" height="551" alt="Screenshot 2026-04-17 at 18 14 28"
src="https://github.com/user-attachments/assets/42485cb2-8be5-4a55-a311-479ed3226908"
/>
## Summary
- Added image-mode SVG export and clipboard copy support for the
halftone generator.
- Reworked the export panel UX into separate `Download` and `Copy`
sections with format-only buttons.
- Simplified the SVG output to reduce redundant segments while
preserving the rendered result.
- Updated related halftone canvas, state, exporter, and illustration
code to support the new flow.
## Testing
- `yarn nx typecheck twenty-website-new`
- `yarn nx build twenty-website-new`
Previously this blocked users who only had SMTP configured to send
outbound emails. This fixes it by making the `messageChannel` and
persist layers conditional.
## Summary
- Relocates `AddTableWidgetViewTypeFastInstanceCommand` from `1-22/` to
`1-23/` and bumps its `@RegisteredInstanceCommand` version from `1.22.0`
to `1.23.0`. The original timestamp `1775752190522` is preserved so the
command slots chronologically into the existing 1.23 sequence;
auto-discovered via `@RegisteredInstanceCommand`, no module wiring
change needed.
- Same pattern as #19792 (move
`pageLayoutWidget.conditionalAvailabilityExpression` to 1.23).
## Context
Picking an aggregate option (Count, Sum, Percentage Not Empty, etc.) in
a dashboard Table widget footer did nothing visually — the value never
appeared or updated
## Fix
RecordTableWidget was missing RecordIndexTableContainerEffect, which
reactively syncs currentView.viewFields[].aggregateOperation from the
Apollo cache into the viewFieldAggregateOperationState jotai atom that
the footer reads
<img width="612" height="261" alt="Screenshot 2026-04-17 at 14 17 47"
src="https://github.com/user-attachments/assets/b4409b0e-82a6-4614-bc09-653be738134a"
/>
# Introduction
Even though this is not currently possible through the API, neither via
the metadata API nor via the manifest (manifest `permissionsFlag`
declarations etc. are made from within a declared role), prevent any app
from creating permission entities over another app's role, from within
the validation engine itself.
## `isEditable`
We might want to deprecate this column at some point from the entity
itself, as the relevant question is now "what app owns that role?"
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
- Replaces the standalone TypeORM migration
`1775654781000-addConditionalAvailabilityExpressionToPageLayoutWidget.ts`
with a registered fast instance command under
`packages/twenty-server/src/database/commands/upgrade-version-command/1-23/`,
so the `pageLayoutWidget.conditionalAvailabilityExpression` column is
created through the unified upgrade pipeline.
- Uses `ADD COLUMN IF NOT EXISTS` / `DROP COLUMN IF EXISTS` so the new
instance command is a safe no-op for environments that already applied
the previous TypeORM migration.
- Keeps the original timestamp `1775654781000` so the command slots
chronologically into the existing 1.23 sequence; auto-discovered via
`@RegisteredInstanceCommand`, no module wiring needed.
## Context
Reported error when creating a new workspace on `main`:
> column PageLayoutWidgetEntity.conditionalAvailabilityExpression does
not exist
Aligns this column addition with the rest of the 1.23 schema changes
that already use the instance-command pattern.
## Summary
- Add `WorkspaceMigrationGraphqlApiExceptionInterceptor` to
`MarketplaceResolver` and `ApplicationInstallResolver` so validation
failures during app install return `METADATA_VALIDATION_FAILED` with
structured `extensions.errors` instead of generic
`INTERNAL_SERVER_ERROR`
- Update SDK `installTarballApp()` to pass the full GraphQL error object
(including extensions) through the install flow
- Add `formatInstallValidationErrors` utility to format structured
validation errors for CLI output
- Add integration test verifying structured error responses for invalid
navigation menu items and view fields
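A sketch of what the formatting utility does (the error shape follows the structured `extensions.errors` described above; the exact field names are assumptions):

```typescript
// Hypothetical shape of one structured validation error.
type InstallValidationError = { path: string; message: string };

// Turn structured validation errors into readable CLI lines instead of
// a single opaque INTERNAL_SERVER_ERROR string.
const formatInstallValidationErrors = (
  errors: InstallValidationError[],
): string => errors.map((e) => `  ✗ ${e.path}: ${e.message}`).join('\n');
```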
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
Aligns **workspace member** editing and **onboarding** with how the
product is actually used: profile and other “settings” fields go through
**`updateWorkspaceMemberSettings`**, while **`/graphql`** record APIs
follow **object-level** permissions for the `workspaceMember` object.
## Product behaviour
### Completing “Create profile” onboarding
Users who must create a profile (empty name at sign-up) get
`ONBOARDING_CREATE_PROFILE_PENDING` set. The onboarding UI saves the
name with **`updateWorkspaceMemberSettings`**, not with a workspace
record **`updateOne`**.
**Before:** The server only cleared the pending flag on
**`workspaceMember.updateOne`**, so the flag could stay set and
onboarding appeared stuck.
**After:** Clearing the profile step runs when
**`updateWorkspaceMemberSettings`** persists an update that includes a
**name** (same rules as before: non-empty name parts). Onboarding can
advance normally after **Continue** on Create profile.
### Two ways to change workspace member data
| Path | Typical use | Who can change what |
|------|----------------|---------------------|
| **`updateWorkspaceMemberSettings`** (metadata API) | Standard member fields the app treats as “my profile / preferences” (name, avatar-related settings, locale, time zone, etc.) | **Always** your **own** workspace member. Changing **another** member still requires **Workspace members** in role settings (`WORKSPACE_MEMBERS`). Custom fields are **not** allowed on this endpoint (unchanged). |
| **`/graphql`** record mutations on **`workspaceMember`** | Custom fields, integrations, anything that goes through the generic record API | **`WorkspaceMember`** is special-cased in permissions: **read** stays **on** for everyone, but **update / create / delete** require **`WORKSPACE_MEMBERS`**, including updating **your own** row via `/graphql`. So a **Member** without that permission cannot fix their name through **`updateWorkspaceMember`**; they use **Settings** / **`updateWorkspaceMemberSettings`** instead. |
This matches **`WorkspaceRolesPermissionsCacheService`**: for the
workspace member object, `canReadObjectRecords` is always true;
`canUpdateObjectRecords` (and delete-related flags) follow
**`WORKSPACE_MEMBERS`**.
### Hooks and delete side-effects
- Removed **`workspaceMember.updateOne`** pre-query hook and
**`WorkspaceMemberPreQueryHookService`**: they duplicated the same rules
the permission cache already enforces for `/graphql`.
- **`WorkspaceMember.deleteOne`** pre-hook still tells users to remove
members via the dedicated flow; the post-hook only runs the
**`deleteUserWorkspace`** side-effect when a member row is actually
removed—**no** extra settings-permission check there, since only callers
that already passed **object** delete permission can remove the row.
## Tests
- **`workspace-members.integration-spec.ts`**: clarifies and extends
coverage so **`/graphql`** **`updateOne`** is denied for **own** record
on a **standard** name field and on a **custom** field when the role
lacks **`WORKSPACE_MEMBERS`**.
## Implementation notes
- **`OnboardingService.completeOnboardingProfileStepIfNameProvided`**
centralises the “clear profile pending if name present” logic;
**`UserResolver.updateWorkspaceMemberSettings`** calls it after save,
using the typed update payload’s **`name`** (no cast).
- **`UserWorkspaceService.updateUserWorkspaceLocaleForUserWorkspace`**:
drops a redundant **`coreEntityCacheService.invalidate`**;
**`updateWorkspaceMemberSettings`** still invalidates the user-workspace
cache after the mutation.
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
# Introduction
Gracefully validate that, when creating an entity, its
`universalIdentifier` is available within the full application metadata
maps context (current app + the Twenty standard app, currently the only
managed dependencies).
## Summary
Adding `https://api.twenty.com/mcp` as an MCP server in Claude fails
with `Couldn't reach the MCP server` before OAuth can start. Two
independent bugs cause this:
1. **Missing path-aware well-known route.** The latest MCP spec
instructs clients to probe `/.well-known/oauth-protected-resource/mcp`
before `/.well-known/oauth-protected-resource`. Only the root path was
registered, so the path-aware request fell through to
`ServeStaticModule` and returned the SPA's `index.html` with HTTP 200.
Strict clients (Claude.ai) tried to parse it as JSON and gave up. Fixed
by registering both paths on the same handler.
2. **Stale protocol version.** Server advertised `2024-11-05`, which
predates Streamable HTTP. We've implemented Streamable HTTP (SSE
response format was added in #19528), so bumped to `2025-06-18`.
Reproduction before the fix:
```
$ curl -s -o /dev/null -w "%{http_code} %{content_type}\n" https://api.twenty.com/.well-known/oauth-protected-resource/mcp
200 text/html; charset=UTF-8
```
After the fix this returns `application/json` with the RFC 9728 metadata
document.
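The dual registration from fix 1 boils down to one handler behind both discovery paths (an illustrative matcher; the real controller uses NestJS routing):

```typescript
// Both the root probe and the path-aware probe must hit the same
// discovery handler instead of falling through to the static SPA.
const DISCOVERY_PATHS = new Set([
  '/.well-known/oauth-protected-resource',
  '/.well-known/oauth-protected-resource/mcp', // path-aware probe (latest MCP spec)
]);

const isDiscoveryRequest = (path: string): boolean => DISCOVERY_PATHS.has(path);
```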
Note: this is separate from #19755 (host-aware resource URL for
multi-host deployments).
## Test plan
- [x] `npx jest oauth-discovery.controller` — 2/2 tests pass, including
one asserting both routes are registered
- [x] `npx nx lint:diff-with-main twenty-server` passes
- [ ] After deploy, `curl
https://api.twenty.com/.well-known/oauth-protected-resource/mcp` returns
JSON (not HTML)
- [ ] Adding `https://api.twenty.com/mcp` in Claude reaches the OAuth
authorization screen
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Replaced the `deep-equal` npm package with the existing
`fastDeepEqual` from `twenty-shared/utils` across 5 files in the server
and shared packages
- `deep-equal` was causing severe CPU overhead in the record update hot
path (`executeMany` → `formatTwentyOrmEventToDatabaseBatchEvent` →
`objectRecordChangedValues` → `deepEqual`, called **per field per
record**)
- `fastDeepEqual` is ~100x faster for plain JSON database records since
it skips unnecessary prototype chain inspection and edge-case handling
- Removed the now-unnecessary `LARGE_JSON_FIELDS` branching in
`objectRecordChangedValues` since all fields now use the fast
implementation
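For intuition, a minimal deep-equal for plain JSON values looks like this (the real helper is `fastDeepEqual` from `twenty-shared/utils`; this sketch deliberately ignores Dates, Maps, and prototype-chain edge cases, which is exactly why this style is fast on database records):

```typescript
const jsonDeepEqual = (a: unknown, b: unknown): boolean => {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) {
    return false;
  }
  if (Array.isArray(a) !== Array.isArray(b)) return false;
  const aObj = a as Record<string, unknown>;
  const bObj = b as Record<string, unknown>;
  const aKeys = Object.keys(aObj);
  if (aKeys.length !== Object.keys(bObj).length) return false;
  // Plain JSON only: no prototype walking, no special-cased types.
  return aKeys.every(
    (key) =>
      Object.prototype.hasOwnProperty.call(bObj, key) &&
      jsonDeepEqual(aObj[key], bObj[key]),
  );
};
```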
## Summary
Follow-up to #19755. Simplifies `OAuthDiscoveryController` by dropping
the `authorization_endpoint → frontend base URL` branch that was there
to make `api.twenty.com/mcp` paste-able in MCP clients.
We've decided not to support pasting `api.twenty.com/mcp` — users can
paste `app.twenty.com/mcp`, `<workspace>.twenty.com/mcp`, or a custom
domain, all of which serve both frontend and API. On those hosts,
`authorization_endpoint` was already pointed at the same host as
`issuer`, which is what we want.
## Change
- Remove `isApiHost` helper and the `authorizeBase` branch — use
`issuer` for `authorization_endpoint`.
- Drop now-unused `TwentyConfigService` and `DomainServerConfigService`
injections.
- Drop duplicate `DomainServerConfigModule` import from
`application-oauth.module.ts` (the module is no longer needed).
Net diff: +1 / -22 across 2 files.
## Breaking change
MCP clients configured with `https://api.twenty.com/mcp` will stop
working. They should be reconfigured with the host matching the
workspace they're connecting to (`<workspace>.twenty.com/mcp`,
`app.twenty.com/mcp`, or a custom domain).
## Test plan
- [x] `yarn jest --testPathPatterns="mcp-auth.guard"` → 2/2 passing
(unchanged)
- [x] `tsc --noEmit` clean on modified files
- [ ] Manual verification on staging: `app.twenty.com/mcp` and
`<workspace>.twenty.com/mcp` OAuth flow still works end-to-end
Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
OAuth discovery metadata (RFC 9728 protected-resource, RFC 8414
authorization-server) and the MCP `WWW-Authenticate` header were
hardcoded to `SERVER_URL`. This breaks MCP clients that paste any URL
other than `api.twenty.com/mcp` — the metadata declares `resource:
https://api.twenty.com/mcp`, which doesn't match the URL the client
connected to, so the client rejects it and the OAuth flow never starts.
Reproduced with Claude's MCP integration: pasting
`<workspace>.twenty.com/mcp`, `app.twenty.com/mcp`, or a custom domain
returned *"Couldn't reach the MCP server"* because discovery returned a
resource URL for a different host.
Related memory: MCP clients POST to the URL the user entered, not the
discovered resource URL — so every paste-able hostname has to advertise
`resource` for that same hostname.
## What the server now does
`WorkspaceDomainsService.getValidatedRequestBaseUrl(req)` resolves the
canonical base URL for the host the request came in on, validated
against the set of hosts we actually serve:
- `SERVER_URL` (e.g. `api.twenty.com`) — API host
- default base URL (e.g. `app.twenty.com`) — the `DEFAULT_SUBDOMAIN`
base
- `FRONTEND_URL` bare host
- any `<workspace>.twenty.com` subdomain (DB lookup)
- any workspace `customDomain` where `isCustomDomainEnabled = true`
- any registered `publicDomain`
An unrecognized / spoofed Host falls back to
`DomainServerConfigService.getBaseUrl()`. **We never reflect arbitrary
Host values into the response.**
Callers updated:
- `OAuthDiscoveryController.getProtectedResourceMetadata` — echoes the
validated host into `resource` and `authorization_servers`.
- `OAuthDiscoveryController.getAuthorizationServerMetadata` — uses the
validated host for `issuer` and `*_endpoint`, **except**
`authorization_endpoint`: when the request came in via `SERVER_URL`
(API-only, no `/authorize` route), we keep that one pointed at the
default frontend base URL.
- `McpAuthGuard` — sets `WWW-Authenticate: Bearer
resource_metadata=\"<validatedBase>/.well-known/oauth-protected-resource\"`
on 401s, so the MCP client's follow-up discovery fetch lands on the same
host it started on.
## Security
- Workspace identity is already bound to the JWT via per-workspace
signing secrets (`jwtWrapperService.generateAppSecret(tokenType,
workspaceId)`). Host-aware discovery does not weaken that.
- Custom domains are only accepted once `isCustomDomainEnabled = true`
(i.e. after DNS verification), so an attacker can't register a
custom-domain mapping on a workspace and have discovery reflect it
before it's been proven.
- Unknown / spoofed Hosts fall through to the default base URL.
## Drive-by
Fixed a duplicate `DomainServerConfigModule` import in
`application-oauth.module.ts` while adding `WorkspaceDomainsModule`.
## Companion infra change required for custom domains
Customer custom domains (`crm.acme.com/mcp`) also require an
ingress-level fix to exclude `/mcp`, `/oauth`, and `/.well-known` from
the `/s\$uri` rewrite applied when `X-Twenty-Public-Domain: true`.
Shipping that in a twenty-infra PR (will cross-link here).
## Test plan
- [x] 14 new tests in
`WorkspaceDomainsService.getValidatedRequestBaseUrl` covering: missing
Host, SERVER_URL, base URL, FRONTEND_URL, workspace subdomain, unknown
subdomain fallback, enabled custom domain, disabled custom domain,
public domain, completely unrecognized host, lowercase coercion,
malformed Host, single-workspace mode fallback, DB throwing → fallback
- [x] New `oauth-discovery.controller.spec.ts` covering both endpoints
across api / app / workspace-subdomain / custom-domain hosts, plus
`cli_client_id` propagation
- [x] Rewrote `mcp-auth.guard.spec.ts` to cover `WWW-Authenticate` for
all four host types (api, workspace subdomain, custom domain, spoofed
fallback)
- [x] `yarn jest
--testPathPatterns=\"workspace-domains.service|oauth-discovery.controller|mcp-auth.guard\"`
→ 41/41 passing
- [x] `tsc --noEmit` clean on all modified files
- [ ] Manual verification against staging: connect Claude to
`api.twenty.com/mcp`, `app.twenty.com/mcp`,
`<workspace>.twenty.com/mcp`, and a custom domain and confirm OAuth flow
completes on each
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Add a `pageLayoutId` foreign key to `CommandMenuItem`, allowing
command menu items to be scoped to a specific page layout instead of
being globally available
- Filter command menu items by the current page layout on the frontend.
Items with a `pageLayoutId` only appear when viewing that layout, while
items without one remain globally visible
- Create an effect to track the current page layout ID
- Include a seed example: a "Show Notification" command pinned to the
Star history standalone page layout
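The frontend scoping rule described above can be sketched as follows (illustrative types and helper name, not the exact Twenty implementation):

```typescript
// Sketch of the scoping rule: items with a pageLayoutId only appear on that
// layout; items without one stay globally visible. Types are illustrative.
type CommandMenuItem = {
  id: string;
  label: string;
  pageLayoutId?: string | null;
};

const visibleCommandMenuItems = (
  items: CommandMenuItem[],
  currentPageLayoutId: string | null,
): CommandMenuItem[] =>
  items.filter(
    (item) =>
      item.pageLayoutId == null || item.pageLayoutId === currentPageLayoutId,
  );
```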
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
- Move record fetching and payload building from the enrichment hook
into `TriggerWorkflowVersionEngineCommand`, following the same
component-based pattern as `DeleteRecordsCommand` and
`RestoreRecordsCommand`
- The enrichment hook now only stores workflow metadata (`trigger`,
`availabilityType`, `availabilityObjectMetadataId`); the component uses
`useLazyFetchAllRecords` for exclusion mode (Select All) with full
pagination
- `buildTriggerWorkflowVersionPayloads` is now a pure function accepting
`selectedRecords: ObjectRecord[]` instead of reading from the Jotai
store
Fixes the issue introduced by #19718 which blocked Select All with a
warning toast instead of implementing it.
## Test plan
- [ ] Select individual records → run workflow trigger from command menu
→ works as before
- [ ] Click Select All → run workflow trigger from command menu →
fetches all matching records and runs the workflow
- [ ] Select All with some records deselected → correctly excludes those
records
- [ ] Global workflows (no object context) → run without payload as
before
- [ ] Bulk record triggers → payload wraps records in `{namePlural:
[records]}`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- `SettingsApplicationCustomTab` renders `FrontComponentRenderer` which
calls `useFrontComponentExecutionContext` →
`useLayoutRenderingContext()`, but the settings page never provided a
`LayoutRenderingProvider`
- Every other render site (side panel, command menu, record pages) wraps
`FrontComponentRenderer` with this provider — it was just missed here
- Opening the "Custom" tab in Settings → Applications crashes with:
`LayoutRenderingContext Context not found`
- Fix: wrap with `LayoutRenderingProvider` using `DASHBOARD` layout type
and no target record, matching the pattern used in
`SidePanelFrontComponentPage`
## Test plan
- [ ] Open Settings → Applications → any app with a custom settings tab
- [ ] Click the "Custom" tab — should render the front component without
crashing
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
This regressed with the migration work.
The type checker should have caught it, but didn't because of `Partial`;
changed it to `Pick` instead to avoid similar cases in the future.
/closes https://github.com/twentyhq/twenty/issues/19745
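Why `Partial` let this slip through while `Pick` would not, shown on a hypothetical reduced type (not the actual Twenty types):

```typescript
// With Partial<T>, every property is optional, so an update object that omits
// a field still type-checks and the field is silently dropped from the
// payload. Pick<T, K> makes the chosen keys required. Illustrative type.
type FieldMetadataItem = {
  label: string;
  isUnique: boolean;
  settings: Record<string, unknown>;
};

// Loose: compiles even though isUnique and settings are missing.
const looseInput: Partial<FieldMetadataItem> = { label: 'Email' };

// Strict: forces callers to supply the fields the mutation must send.
const strictInput: Pick<FieldMetadataItem, 'label' | 'isUnique' | 'settings'> = {
  label: 'Email',
  isUnique: true,
  settings: {},
};
```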
Add Storybook stories and example components to test event forwarding
through the front component renderer: form events (text input, checkbox,
focus/blur, submit), keyboard events (key/code/modifiers), and host API
calls (navigate, snackbar, progress, close panel)
Resolves the two issues below. They occurred because the maximum number of
renderers (maxLimit) was reached and the browser started losing contexts
before restoring them. Now we make sure the context that is lost belongs to
visuals that are not currently in the viewport, and when those visuals come
back into the viewport, they are loaded again.
https://github.com/twentyhq/core-team-issues/issues/2372
https://github.com/twentyhq/core-team-issues/issues/2373
## Summary
- Bump `twenty-sdk` from `1.22.0-canary.6` to `1.22.0`
- Bump `twenty-client-sdk` from `1.22.0-canary.6` to `1.22.0`
- Bump `create-twenty-app` from `1.22.0-canary.6` to `1.22.0`
- Fix `isUnique` not sent in field update mutation: added `isUnique` and
`settings` to the `Pick` type in `useUpdateOneFieldMetadataItem`, which
previously silently dropped these properties from the GraphQL payload.
- Invalidate cache on failed migration rollback: added an
`invalidateCache()` call in the migration runner's catch block so that a
failed migration (e.g., unique index creation failing due to duplicate
data) regenerates the Redis hash, preventing stale metadata from
persisting in frontend localStorage indefinitely.
## Summary
- import shared company logos and people avatars into
`packages/twenty-website-new/public/images/shared`
- expand the shared asset registry with the new local logo and avatar
paths
- update the home hero data to use local shared assets for matched
people and company icons
- replace synthetic or mismatched hero avatars with better-fitting named
or anonymous local assets
- prefer local shared company logos in hero visual components before
falling back to remote icons
## Testing
- Not run (not requested)
---------
Co-authored-by: Abdullah <125115953+mabdullahabaid@users.noreply.github.com>
## Summary
Route-triggered logic functions were returning empty 500 responses with
zero server-side logging when the Lambda build chain failed. This PR
makes those failures observable and returns meaningful HTTP responses to
API clients.
- **Observability** — Log errors (with stack traces) at each layer of
the execution chain: `LambdaDriver` (deps-layer fetch, SDK-layer fetch,
invocation), `LogicFunctionExecutorService`, and `RouteTriggerService`.
- **Typed exceptions** — Replace raw `throw error` sites with
`LogicFunctionException` carrying an appropriate code and
`userFriendlyMessage` (new codes: `LOGIC_FUNCTION_EXECUTION_FAILED`,
`LOGIC_FUNCTION_LAYER_BUILD_FAILED`).
- **Correct HTTP semantics** — `RouteTriggerService` maps inner
exception codes to the right `RouteTriggerExceptionCode` so
`LOGIC_FUNCTION_NOT_FOUND` returns 404 and `RATE_LIMIT_EXCEEDED` returns
429 (new code + filter case) instead of a generic 500.
- **User-facing messages** — Forward the inner
`CustomException.userFriendlyMessage` when wrapping into
`RouteTriggerException`, without leaking raw internal error text into
the public exception message.
- **Infra** — Bump Lambda ephemeral storage from 2048 to 4096 MB to
prevent `ENOSPC` errors during yarn install layer builds (root cause of
the original silent failures).
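The code-to-status mapping boils down to something like the following (a sketch using the exception codes named above; the function name is illustrative):

```typescript
// Sketch: map inner LogicFunctionException codes onto HTTP statuses so
// route-triggered logic functions stop returning blanket 500s.
type LogicFunctionExceptionCode =
  | 'LOGIC_FUNCTION_NOT_FOUND'
  | 'RATE_LIMIT_EXCEEDED'
  | 'LOGIC_FUNCTION_EXECUTION_FAILED'
  | 'LOGIC_FUNCTION_LAYER_BUILD_FAILED';

const httpStatusForCode = (code: LogicFunctionExceptionCode): number => {
  switch (code) {
    case 'LOGIC_FUNCTION_NOT_FOUND':
      return 404;
    case 'RATE_LIMIT_EXCEEDED':
      return 429;
    default:
      // Build/execution failures stay 500, but now carry a
      // userFriendlyMessage and are logged with stack traces server-side.
      return 500;
  }
};
```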
## Summary
Fixes error snackbars disappearing too quickly by **disabling
auto-dismiss for error variants by default**. Error snackbars now remain
visible until the user closes them.
Closes #19694
## What changed
- Error snackbars no longer default to a 6s timeout (they only
auto-dismiss if an explicit `duration` is provided).
- Non-error snackbars keep the existing default auto-dismiss behavior
(6s).
- Progress bar animation/visibility is tied to auto-dismiss (no progress
bar when there’s no duration).
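The resulting duration rule is small; roughly (a sketch, not the exact `SnackBar.tsx` code):

```typescript
// Sketch of the auto-dismiss rule: error snackbars only dismiss when an
// explicit duration is passed; other variants keep the 6s default. A null
// duration means "stay until closed" and also hides the progress bar.
const DEFAULT_SNACKBAR_DURATION_MS = 6000;

const resolveSnackBarDuration = (
  variant: 'error' | 'success' | 'info',
  explicitDuration?: number,
): number | null => {
  if (explicitDuration !== undefined) return explicitDuration;
  return variant === 'error' ? null : DEFAULT_SNACKBAR_DURATION_MS;
};
```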
**Files**
-
packages/twenty-front/src/modules/ui/feedback/snack-bar-manager/components/SnackBar.tsx
## How to test
1. Trigger a long error snackbar (example: spreadsheet/CSV import error
with a long message).
2. Confirm the snackbar **stays visible** until clicking **Close**.
3. Trigger a success/info snackbar and confirm it **still
auto-dismisses** after ~6s.
## Notes
- Call sites that explicitly pass `options.duration` for error snackbars
will continue to auto-dismiss (intentional).
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
- Remove the evictViewMetadataForViewIds step from the page-layout reset
flow. It synchronously cleared viewFields/viewFieldGroups rows from the
metadata store, leaving a window where
useFieldsWidgetGroups saw a view with no fields and fell back to
buildDefaultFieldsWidgetGroups, briefly rendering a synthetic "General"
+ "Other" layout before the real reset defaults
arrived.
- `invalidateMetadataStore()` alone is sufficient: it marks the
collections stale and triggers `MinimalMetadataLoadEffect` to refetch,
which replaces the current data atomically. The UI now
transitions old-layout → new-default with no synthetic flash.
- Simplified refreshPageLayoutAfterReset to no longer take a
collectAffectedViewIds callback, and updated both tab/widget reset call
sites plus ObjectLayout.tsx accordingly.
- Deleted the now-unused evictViewMetadataForViewIds and
collectViewIdsFromWidgets utils.
## Summary
### Problem
The upgrade migration system required new workspaces to always start
from a workspace command, which was too rigid. When the system was
mid-upgrade within an instance command (IC) segment, workspace creation
would fail or produce inconsistent state.
### Solution
#### Workspace-scoped instance command rows
Instance commands now write upgrade migration rows for **all
active/suspended workspaces** alongside the global row. This means every
workspace has a complete migration history, including instance command
records.
- `InstanceCommandRunnerService` reloads `activeOrSuspendedWorkspaceIds`
immediately before writing records (both success and failure paths) to
mitigate race conditions with concurrent workspace creation.
- `recordUpgradeMigration` in `UpgradeMigrationService` accepts a
discriminated union over `status`, handles `error: unknown` formatting
internally, and writes global + workspace rows in batch.
#### Flexible initial cursor for new workspaces
`getInitialCursorForNewWorkspace` now accepts the last **attempted**
(not just completed) instance command with its status:
- If the IC is `completed` and the next step is a workspace segment →
cursor is set to the last WC of that segment (existing behavior).
- If the IC is `failed` or not the last of its segment → cursor is set
to that IC itself, preserving its status.
This allows workspaces to be created at any point during the upgrade
lifecycle, including mid-IC-segment and after IC failure.
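The cursor decision described above can be sketched like this (simplified, with illustrative types; the real implementation lives in `getInitialCursorForNewWorkspace`):

```typescript
// Sketch of the branching: given the last *attempted* instance command (IC),
// decide where a brand-new workspace's migration cursor starts. Types and
// field names are illustrative.
type CommandRef = { id: string; kind: 'IC' | 'WC' };

type AttemptedInstanceCommand = {
  command: CommandRef;
  status: 'completed' | 'failed';
  isLastOfSegment: boolean;
  lastWorkspaceCommandOfNextSegment: CommandRef | null;
};

type Cursor = { command: CommandRef; status: 'completed' | 'failed' };

const getInitialCursorForNewWorkspace = (
  lastAttempted: AttemptedInstanceCommand,
): Cursor => {
  const { command, status, isLastOfSegment, lastWorkspaceCommandOfNextSegment } =
    lastAttempted;
  // Completed IC at the end of its segment, next segment is workspace-scoped:
  // cursor is the last WC of that segment (pre-existing behavior).
  if (status === 'completed' && isLastOfSegment && lastWorkspaceCommandOfNextSegment) {
    return { command: lastWorkspaceCommandOfNextSegment, status: 'completed' };
  }
  // Failed IC, or IC mid-segment: cursor is the IC itself, preserving status.
  return { command, status };
};
```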
#### Relaxed workspace segment validation
`validateWorkspaceCursorsAreInWorkspaceSegment` accepts workspaces whose
cursor is:
1. Within the current workspace segment, OR
2. At the immediately preceding instance command with `completed` status
(handles the `-w` single-workspace upgrade scenario).
Workspaces with cursors in a previous segment, ahead of the current
segment, or at a preceding IC with `failed` status are rejected.
### Test plan
Created empty workspaces to allow testing the upgrade with several active
workspaces.
## Summary
- The `AddStandalonePageFastInstanceCommand` migration was failing with:
`default for column "type" cannot be cast automatically to type
core."navigationMenuItem_type_enum"`
- The migration was missing `DROP DEFAULT` / `SET DEFAULT` around the
`ALTER COLUMN TYPE` for `navigationMenuItem.type`. PostgreSQL cannot
automatically cast the existing default (`'VIEW'::old_enum`) to the new
enum type.
- The same 3-step pattern (DROP DEFAULT → ALTER TYPE → SET DEFAULT) was
already correctly applied for `pageLayout.type` in the same migration —
this fix brings `navigationMenuItem.type` in line.
- Add 72 missing HTML and SVG elements to the remote-dom component
registry (48 HTML + 24 SVG), bringing the total from 47 to 119 supported
elements
- HTML additions include semantic inline text (b, i, u, s, mark, sub,
sup, kbd, etc.), description lists, ruby annotations, structural
elements (figure, details, dialog), and form utilities (fieldset,
progress, meter, optgroup)
- SVG additions include containers (svg, g, defs), shapes (path, circle,
rect, line, polygon), text (text, tspan), gradients (linearGradient,
radialGradient, stop), and utilities (clipPath, mask, foreignObject,
marker)
- Add htmlTag override to support SVG elements with camelCase names
(e.g. clipPath, foreignObject) while keeping custom element tags
lowercase per the Web Components spec
## Summary
- Introduce a `activeNavigationMenuItemState` Jotai atom (persisted via
localStorage) to disambiguate active navigation items when multiple
items share the same URL
- Add active item evaluation for record show pages with three scenarios:
1. Navigating from a nav item → parent stays active + dedicated RECORD
item also active
2. Clicking a dedicated RECORD nav item → only that item active
3. Navigating via search/direct link → OBJECT nav item fallback, or
Opened section if none exists
- Extract shared active logic into `isNavigationMenuItemActive` utility
to eliminate duplication between orphan items and folder items
- Support multiple simultaneously active items within folders via
`Set<number>` instead of a single index
---------
Co-authored-by: Etienne <45695613+etiennejouan@users.noreply.github.com>
## Context
Tab duplication was broken after the view creation logic was deferred
and moved to the FE.
- Duplicating a tab or a FIELDS widget now produces a fully independent
copy: new view, new view field groups, new view fields — all with fresh
IDs — while preserving any unsaved edits
from the source widget.
- Removed the backend auto-seed of default view fields / view field
groups in ViewService.createOne for FIELDS_WIDGET views. The frontend
always sends the complete layout via
upsertFieldsWidget, so the auto-seed was both redundant and the source
of potential bugs.
- Extracted a shared useDuplicateFieldsWidgetForPageLayout hook used by
both tab and widget duplication paths, plus a small
useCloneViewInMetadataStore helper that clones the FlatView
in the metadata store and returns the copied flat view fields/groups for
the caller.
Fix a stack overflow (Maximum call stack size exceeded) in
getAllStepIdsInLoop caused by an If/Else branch inside an iterator loop
pointing back to the enclosing iterator. The traversal incorrectly
treated the enclosing iterator as a nested iterator, calling
getAllStepIdsInLoop recursively with fresh visited sets, causing
infinite recursion.
Add the enclosing iterator's own ID to the skip condition in
traverseSteps so back-edges from If/Else branches are handled the same
way as back-edges from regular nextStepIds.
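The fix amounts to including the enclosing iterator's ID in the skip check before descending into branches. A reduced sketch (the step shape and helper names are illustrative):

```typescript
// Reduced sketch of getAllStepIdsInLoop: collect every step id reachable
// inside an iterator's loop body, treating edges back to the enclosing
// iterator (whether from nextStepIds or from if/else branches) as loop
// back-edges rather than nested iterators to descend into.
type Step = {
  id: string;
  nextStepIds: string[];
  branchStepIds?: string[]; // e.g. if/else branch targets
};

const getAllStepIdsInLoop = (
  iteratorId: string,
  firstStepId: string,
  stepsById: Record<string, Step>,
): string[] => {
  const visited = new Set<string>();
  const traverse = (stepId: string): void => {
    // Skip back-edges to the enclosing iterator and already-visited steps.
    // Without the iteratorId check, an If/Else branch pointing back at the
    // iterator recursed forever with fresh visited sets.
    if (stepId === iteratorId || visited.has(stepId)) return;
    visited.add(stepId);
    const step = stepsById[stepId];
    if (!step) return;
    for (const next of [...step.nextStepIds, ...(step.branchStepIds ?? [])]) {
      traverse(next);
    }
  };
  traverse(firstStepId);
  return [...visited];
};
```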
<img width="1054" height="723" alt="Capture d’écran 2026-04-15 à 11 00
42"
src="https://github.com/user-attachments/assets/aee1477b-5059-4552-809e-7c8a34a9ec4a"
/>
## Summary
- replace static home card imagery with code-driven visuals for the
familiar interface, fast path, and live data cards
- refine the live data card interaction details, including hover states,
animated cursors, filter chips, table styling, and inline tag editing
- add shared company logos, people avatars, and updated partner
testimonial assets to support the new marketing visuals
- refresh related home, partner, case study, signoff, testimonials, and
design-system content/components to align with the updated marketing
presentation
## Testing
- `npx tsc -p tsconfig.json --noEmit` in `packages/twenty-website-new`
Co-authored-by: Abdullah <125115953+mabdullahabaid@users.noreply.github.com>
## Summary
Introduces a dedicated **metadata** mutation to update **standard
(non-custom)** workspace member settings, moves profile-related UI to
use it, and aligns **workspace member** record permissions with the rest
of the CRM so users cannot escalate visibility via RLS by editing their
own member record.
## Product behaviour
### Profile and appearance (standard fields)
- Users can still update **their own** standard workspace member fields
that the product exposes in **Settings / Profile** (e.g. name, locale,
color scheme, avatar flow) via the new
**`updateWorkspaceMemberSettings`** mutation.
- The mutation returns a **boolean**; the app **merges** the updated
fields into local state so the UI stays in sync without refetching the
full workspace member record.
- **Locale** changes also keep **`userWorkspace`** in sync when a locale
is present in the payload (including from the workspace `updateOne` path
when applicable).
### Custom fields on workspace members
- The dedicated metadata mutation **rejects** any **custom** workspace
member field (and unknown keys). Those updates must go through the
normal **object** `updateOne` pipeline, which is subject to **object-
and field-level** permissions like other records. But since we don't
have object- and field-level permission configuration for system objects
yet, this permission is derived from the **Workspace members** settings
permission.
- **Workspace member** is no longer exempt from ORM permission
validation for updates merely because it is a **system** object. Users
who **do not** have workspace member access (e.g. no **Workspace
members** settings permission and no equivalent broad settings access on
the role) **cannot** use `updateOne` on `workspaceMember` to change
**custom** (or other) fields on their own row—even though that row is
used for RLS predicates.
- This closes a path where someone could widen what they can see by
writing to fields that drive row-level rules.
### Who can change another member
- Updating **another** user’s workspace member still requires
**Workspace members** (or equivalent) settings permission, consistent
with admin tooling.
## What changed
This refactor fixes AI chat error surfacing by aligning both the backend
and frontend with the existing GraphQL error architecture instead of
adding AI-local error translation.
On the backend:
- add a dedicated GraphQL billing exception path
- register billing GraphQL handling globally for GraphQL requests
- reuse the existing AI GraphQL interceptor path for agent/chat
exceptions
- keep billing status classification shared between REST and GraphQL
- remove the earlier attempt to preserve `CustomException` metadata in
the global GraphQL fallback
On the frontend:
- keep the original Apollo GraphQL error object in AI chat state
- reuse shared Apollo/GraphQL helpers for user-facing messages and
error-type checks
- delete AI-specific error extraction helpers that duplicated generic
GraphQL parsing
- replace a few direct `extensions.subCode` call sites with a shared
predicate
## Why it changed
The original bug was that `BillingException` and AI exceptions thrown
from chat were not being translated into GraphQL errors with the
expected `extensions.subCode` and `extensions.userFriendlyMessage`, so
the AI chat UI had nothing structured to inspect.
An intermediate fix worked mechanically but pushed `CustomException`
handling into the global GraphQL fallback, which blurred the intended
layering. This PR moves the behavior back to explicit GraphQL edges.
## Root cause
`AgentChatResolver` could throw `BillingException` and `AgentException`,
but:
- billing had a REST exception filter and no shared GraphQL equivalent
- AI chat was not consistently using the same GraphQL exception
translation path as the sibling AI resolver
- the frontend chat UI had drifted into AI-specific error parsing
instead of consuming the same structured Apollo errors as the rest of
the app
## Impact
- `BILLING_CREDITS_EXHAUSTED` is now preserved through GraphQL and can
render the existing credits-exhausted UI in chat
- `API_KEY_NOT_CONFIGURED` is preserved through the AI GraphQL path
- AI chat now follows the same general GraphQL error consumption pattern
as the rest of the frontend
- billing GraphQL handling is less dependent on individual resolver
authors remembering to add a filter
## Validation
- `yarn jest --config packages/twenty-server/jest.config.mjs
packages/twenty-server/src/engine/core-modules/billing/utils/__tests__/billing-graphql-api-exception-handler.util.spec.ts
packages/twenty-server/src/engine/metadata-modules/ai/ai-agent/utils/__tests__/agent-graphql-api-exception-handler.util.spec.ts`
- `yarn jest --config packages/twenty-front/jest.config.mjs
packages/twenty-front/src/utils/__tests__/is-graphql-error-of-type.util.test.ts`
- `npx oxlint --type-aware ...` on touched backend/frontend files
- `npx prettier --check ...` on touched backend/frontend files
## Follow-up ideas
- consolidate frontend GraphQL error helpers further so more existing
direct `extensions.subCode` checks move to shared utilities
- consider whether common GraphQL exception filter registration should
live in a more explicit GraphQL-specific module instead of
`CoreEngineModule`
- add an end-to-end test for a real `sendChatMessage` GraphQL failure
path in AI chat
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add support for standalone pages: a new `PageLayout` type
(`STANDALONE_PAGE`) that can be rendered independently at
`/page/:pageLayoutId`, not tied to any record or object context.
- New `STANDALONE_PAGE` page layout type
- New `PAGE_LAYOUT` navigation menu item type: adds a `pageLayoutId`
foreign key to `NavigationMenuItemEntity`, allowing sidebar items to
link directly to standalone pages
- New `GLOBAL_OBJECT_CONTEXT` command menu availability type: separates
object-context-dependent commands (Create Record, Import, Export, See
Deleted, Create View, Hide Deleted) from truly global ones, so
standalone pages only show relevant commands
- Frontend routing & rendering: adds a `/page/:pageLayoutId` route with
its own page component, header, and command menu
- Widget rendering refactor
- Instance commands: two fast 1.22 migrations: `pageLayoutId` column +
`STANDALONE_PAGE` enum, and `GLOBAL_OBJECT_CONTEXT` availability type
enum
- Workspace command: backfills existing command menu items from `GLOBAL`
to `GLOBAL_OBJECT_CONTEXT` where appropriate
- Dev seeds: adds a sample "Star History" standalone page with an iframe
widget for local development
- Add resetPageLayoutToDefault GraphQL mutation that resets an entire
page layout (all tabs, widgets, view field groups, and view fields) to
their default state in a single operation
- Add a "Reset to default" button in the settings Layout tab
(/settings/objects/:object#layout) with a confirmation modal
<img width="673" height="426" alt="Screenshot 2026-04-14 at 13 13 53"
src="https://github.com/user-attachments/assets/002d33c9-9ea1-49f2-bef6-179ce034c126"
/>
## Context
Expanding field widget supported types to all field types. They will all
fall back by default to the "FIELD" displayMode, which is the inline
display mode (the same one used in the FIELDS widget), and can be extended
to other display modes such as CARD or EDITOR in the future if needed. Most
of the work was already done in the previous PR and was unnecessarily
filtered out until this was properly tested.
<img width="951" height="757" alt="Screenshot 2026-04-14 at 13 24 24"
src="https://github.com/user-attachments/assets/a08a1855-21ea-4e75-8032-4f970b3ff50d"
/>
<img width="915" height="595" alt="Screenshot 2026-04-14 at 13 23 34"
src="https://github.com/user-attachments/assets/8b420c1f-0496-4c1c-91b8-b266c40b3772"
/>
## Summary
- extract and expose `street` from Google place details (`street_number`
+ `route`) on the server DTO
- request and type `street` in front-end geo-map place details query
- use `placeData.street` as the preferred value for `addressStreet1` in
address autofill
- add regression coverage for query fields and street-line precedence
behavior
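Composing `street` from Google place details looks roughly like this (Google's `address_components` shape is real; the helper name and exact extraction are an illustrative sketch):

```typescript
// Sketch: derive a street line from Google Place details address_components
// by joining street_number and route, the two components the server DTO now
// exposes as `street`.
type AddressComponent = { long_name: string; types: string[] };

const extractStreet = (components: AddressComponent[]): string | undefined => {
  const find = (type: string) =>
    components.find((component) => component.types.includes(type))?.long_name;
  const streetNumber = find('street_number');
  const route = find('route');
  const street = [streetNumber, route].filter(Boolean).join(' ');
  return street.length > 0 ? street : undefined;
};
```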
## Why
Address autocomplete selection currently writes full place text
(including city/state/postcode/country) into `addressStreet1`,
duplicating values already mapped to dedicated fields.
Fixes #18860
---------
Signed-off-by: jeevan6996 <jeevanpawar5890@gmail.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
## Context
Moving isActive filtering to the frontend for page layout tabs and
widgets, hiding inactive entities from the UI while keeping them in
state for future reactivation
Next we will implement deactivated standard tab re-activation during tab
creation (cc @Devessier)
<img width="234" height="303" alt="📋 Menu (Slots)"
src="https://github.com/user-attachments/assets/17a25ac6-55e2-4778-b7f0-e7554ed69704"
/>
fixes https://github.com/twentyhq/twenty/issues/19543
+ bonus bug: when deleting an advanced filter, it triggers a destroy that
cascade-deletes the associated view filters. The view filter deletion then
throws.
- Refactored prefillWorkflowCommandMenuItems and
prefillFrontComponentCommandMenuItems to use
validateBuildAndRunWorkspaceMigration instead of raw TypeORM
createQueryBuilder inserts
- This ensures the flat entity cache is properly updated when seeding
command menu items, fixing the Quick Lead item not appearing after
workspace creation
- Moved command menu item prefill calls outside the transaction since
they now go through the migration pipeline
# Introduction
We were allowing the sequence to be empty in the worker context, which hit
an edge case: when importing the `UpgradeModule` through the
`WorkspaceModule` god module, no commands were discovered, and it threw
because the sequence must contain at least one workspace command to allow
workspace creation.
The issue also applied to the twenty-server `AppModule`, which was not
discovering any commands either.
## Integration tests were passing
The integration tests were importing the `CommandModule` at Nest testing
app creation, leading to an asymmetric testing context.
It was a requirement for a legacy commands import and global assignation.
## Fix
The `UpgradeModule` now imports both `WorkspaceCommandsProviderModule` and
`InstanceCommandProviderModule`, which ship the commands directly in the
module.
We could consider moving the commands into the `engine/upgrade` folder.
## Concern
Bootstrap could become slower and slower at both server and worker start.
When this becomes a problem, we will have to import only the latest
workspace command (or similar).
For the moment, it is not worth the risk of importing something other than
the latest workspace command.
## Summary
After uploading an image/file to the empty avatar field in the person
tab, the edit icon next to the field would not appear until the browser
was refreshed or another field was clicked.
### Root cause
- When the user clicks on the avatar field,
`recordFieldListCellEditModePosition` is set to `globalIndex`.
- That is fine when an avatar already exists. But when no avatar is set
yet, the native file picker is opened with no `onClose` handler attached.
- So after the file upload completes,
`recordFieldListCellEditModePosition` is never reset to null.
- `FieldsWidgetCellEditModePortal` stays anchored to the avatar file
element.
- When the user hovers over the same field again, its hover portal competes
to anchor to the same element.
- So `RecordInlineCellDisplayMode` (the edit button) doesn't render.
### Fix
- Pass the `onClose` function through `openFieldInput` to
`openFilesFieldInput`
- `onClose` resets `recordFieldListCellEditModePosition` back to null when
the upload completes.
## Before
https://github.com/user-attachments/assets/ac9318e9-5471-434c-8af3-5c20d0112460
## After
https://github.com/user-attachments/assets/0d064a7f-95ad-4b92-a9ee-d9570f360972
Fixes #19595
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
The `role` @ResolveField on `ApiKeyResolver` calls `getRolesByApiKeys`
with a single-element array per API key. When a query returns N API
keys, this produces N separate DB queries to resolve their roles.
This adds an `apiKeyRoleLoader` to the existing DataLoader
infrastructure. All API key IDs in a single GraphQL request are
collected and resolved in one batched query.
- Before: N queries (one per API key)
- After: 1 query (batched via DataLoader)
## Changes
- `dataloader.service.ts` - new `createApiKeyRoleLoader` method,
delegates to `ApiKeyRoleService.getRolesByApiKeys`
- `dataloader.interface.ts` - `apiKeyRoleLoader` added to `IDataloaders`
- `dataloader.module.ts` - import `ApiKeyModule` so `ApiKeyRoleService`
is available
- `api-key.resolver.ts` - `role()` now uses
`context.loaders.apiKeyRoleLoader.load()` instead of calling the service
directly
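The N→1 batching idea, shown independently of the DataLoader library (a minimal synchronous sketch with illustrative names; the `batchFetch` callback stands in for `ApiKeyRoleService.getRolesByApiKeys`):

```typescript
// Minimal sketch of what the DataLoader pattern buys here: instead of one
// role lookup per API key, collect the ids requested during a GraphQL
// request and resolve them with a single batched query.
type Role = { label: string };

const createBatchedRoleResolver = (
  batchFetch: (apiKeyIds: string[]) => Map<string, Role>,
) => {
  const pending = new Set<string>();
  let cache: Map<string, Role> | null = null;

  return {
    // Each @ResolveField call registers its key instead of querying directly.
    collect: (apiKeyId: string) => pending.add(apiKeyId),
    // One query for all collected keys, however many API keys were returned.
    resolve: (apiKeyId: string): Role | undefined => {
      cache ??= batchFetch([...pending]);
      return cache.get(apiKeyId);
    },
  };
};
```

The real DataLoader additionally scopes the batch to one event-loop tick and caches per request, but the query count goes from N to 1 in the same way.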
## Test plan
- [ ] Verify `apiKeys { id role { label } }` query returns the same
results as before
- [ ] Confirm only 1 role_target query fires regardless of how many API
keys are returned
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
This PR fixes a bug in the `useColorScheme` test suite and removes a
lingering `FIXME` comment where the color scheme was unexpectedly
unsetting during state updates.
## Root Cause
Previously, the Jotai state was being initialized *inside* the
`renderHook` callback using `useSetAtomState`. When the `setColorScheme`
function was called, it triggered a hook re-render, which caused the
callback to execute again and overwrite the new state with the hardcoded
`'System'` initial state.
## The Fix
- Removed the state initialization from inside the render cycle.
- Bootstrapped the state on a fresh store using `resetJotaiStore()` and
`store.set()` *before* rendering the hook.
- Updated the mock `workspaceMember` to correctly use the
`CurrentWorkspaceMember` type.
- Removed the `FIXME` comment and successfully asserted that the color
scheme updates to `'Dark'`.
## Testing
Ran tests locally to confirm the fix works as expected:
`corepack yarn jest --config packages/twenty-front/jest.config.mjs
--testPathPattern=useColorScheme.test.tsx`
---------
Co-authored-by: Srabani Ghorai <subhojit04ghorai@gmail.com>
## Summary
- refresh pricing page content, plan cards, CTA styling, and Salesforce
comparison visuals
- update partner and hero/testimonial visuals, including pulled
carousel-compatible partner testimonial data
- improve halftone export and illustration mounting flows, plus related
button and hydration fixes
- add updated website illustration and pricing assets
## Testing
- Not run (not requested)
---------
Co-authored-by: Abdullah <125115953+mabdullahabaid@users.noreply.github.com>
Handle late TwentyORM workspace-not-found exceptions in the workflow
webhook REST exception filter so deleted workspaces return a 404 instead
of surfacing as internal errors.
Also add a focused regression spec covering the deleted-workspace ORM
codes and the existing workflow-trigger status mappings.
Closes #15544
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
- Removes the `IS_RECORD_TABLE_WIDGET_ENABLED` feature flag, making the
record table widget unconditionally available in dashboard widget type
selection
- The flag was already seeded as `true` for all new workspaces and only
gated UI visibility in one component
(`SidePanelPageLayoutDashboardWidgetTypeSelect`)
- Cleans up the flag from `FeatureFlagKey` enum, dev seeder, and test
mocks
## Analysis
The flag only controlled whether the "View" (Record Table) widget option
appeared in the dashboard widget type selector. The entire record table
widget infrastructure (rendering, creation hooks, GraphQL types,
`RECORD_TABLE` enum in `WidgetType`) is independent of the flag and
fully implemented. No backend logic depends on this flag.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
Implements a ClickHouse-backed polling system to enforce metered-credit
caps for workflow executions, replacing reliance on Stripe billing
alerts. The system re-evaluates tier caps against live pricing on every
poll cycle, allowing price/tier changes to propagate immediately without
recreating Stripe alert objects.
## Key Changes
- **BillingUsageCapService**: New service that queries ClickHouse for
current-period credit usage and evaluates whether a subscription has
reached its metered-credit allowance (tier cap + credit balance)
- `isClickHouseEnabled()`: Checks if ClickHouse is configured
- `getCurrentPeriodCreditsUsed()`: Sums creditsUsedMicro from usageEvent
table for a workspace within a billing period
- `evaluateCap()`: Determines if usage has reached the allowance by
reading live pricing from the subscription
- **EnforceUsageCapJob**: Cron job that polls all active subscriptions
and updates `hasReachedCurrentPeriodCap` on metered items
- Runs every 2 minutes to keep cap enforcement in sync with live usage
- Supports shadow mode (log-only) via
`BILLING_USAGE_CAP_CLICKHOUSE_ENABLED` flag for safe rollout
- Continues processing after per-subscription errors with detailed
logging
- **EnforceUsageCapCronCommand**: CLI command to register the
enforcement cron job
- **MeteredCreditService**: Extracted
`extractMeteredPricingInfoFromSubscription()` as a pure function for
callers that already hold the subscription with pricing loaded, avoiding
redundant DB queries
- **Configuration**: Added `BILLING_USAGE_CAP_CLICKHOUSE_ENABLED` flag
to control enforcement mode (active vs. shadow)
- **Constants**: Added `METERED_OPERATION_TYPES` to define which
operation types count toward the metered product's credit cap
## Implementation Details
- The service queries ClickHouse for the sum of `creditsUsedMicro` in
the current billing period, matching Stripe meter semantics
- Pricing is re-read on every evaluation, so tier changes propagate
within one poll cycle without Stripe alert recreation
- The cron job only updates the database when the cap state actually
changes (no-op if already in the correct state)
- Shadow mode allows safe validation before enabling enforcement;
transitions are logged but not persisted
- Comprehensive test coverage for both the service and cron job,
including error handling and state transitions
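The cap evaluation itself reduces to a simple comparison once the inputs are in hand (an illustrative sketch with assumed names and shapes, not the actual `BillingUsageCapService` code): usage comes from ClickHouse and the allowance from live pricing, so tier changes take effect on the next poll without touching Stripe alerts.

```typescript
// Illustrative cap evaluation: usage vs. tier cap plus purchased credits.
type CapInput = {
  creditsUsedMicro: bigint; // summed from the usageEvent table
  tierCapMicro: bigint; // from the subscription's current pricing
  creditBalanceMicro: bigint; // purchased extra credits
};

const evaluateCap = ({
  creditsUsedMicro,
  tierCapMicro,
  creditBalanceMicro,
}: CapInput) => {
  const allowanceMicro = tierCapMicro + creditBalanceMicro;
  return {
    allowanceMicro,
    hasReachedCap: creditsUsedMicro >= allowanceMicro,
  };
};

// 950 used vs. a 900 cap plus 100 balance: not capped yet.
const result = evaluateCap({
  creditsUsedMicro: 950n,
  tierCapMicro: 900n,
  creditBalanceMicro: 100n,
});
```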
https://claude.ai/code/session_01VksTSrYLXJVCPVBQhQdBTe
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Removes the `workspaceId` column, `@Index()`, and `@ManyToOne`
workspace relation from `BillingSubscriptionItemEntity`
- The entity defined these fields but they don't exist in the actual
database table, causing `column X.workspaceId does not exist` errors at
runtime
- The workspace relationship is already accessible through the parent
`BillingSubscriptionEntity`
## PR description
- The command menu in the side panel now reads directly from
`MAIN_CONTEXT_STORE_INSTANCE_ID` instead of snapshotting the main
context store into a separate side-panel instance when opening. This
keeps the command menu always in sync with the current page state
(selection, filters, view, etc.).
- Removed the broadening/reset-to-selection feature (Backspace to clear
context, "Reset to" button) since the command menu no longer maintains
its own copy of the context.
## Video QA
https://github.com/user-attachments/assets/5d5bc664-b6d4-431d-a271-6ce23d8a4ae0
# Introduction
The index seems to be missing in production only; it blocks the whole migration release on other environments that already implement the index.
**Merge records fix:**
`selectPriorityFieldValue` throws when merging records if the priority record has no value for a field (e.g., null/empty) but two or more other records do. The `recordsWithValues` array is pre-filtered to records with non-empty values, so the priority record isn't in the list. The fix: instead of throwing, fall back to null, since that is the priority record's actual value.
**Duplicated IDs fix**
https://github.com/user-attachments/assets/bd6d7d08-d079-49a5-aad4-740b59a3c246
When applying a filter that reduces the record count, the virtualized
table's record ID array keeps stale entries from the previous larger
result set. loadRecordsToVirtualRows clones the old array (e.g., 60
entries) and only overwrites the first N positions (e.g., 9) with the
new filtered results, leaving positions 9-59 with old IDs. If any old ID
matches a new one, it appears twice in the selection, causing "-> 2
selected" for a single click and a duplicate ID in the merge mutation
payload. The fix: clear the record IDs array in
useTriggerInitialRecordTableDataLoad before repopulating it with fresh
data.
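The stale-ID mechanism is easy to reproduce in isolation (array names below are illustrative): cloning the previous 60-entry array and overwriting only the first 9 positions leaves 51 stale IDs behind.

```typescript
// Minimal repro of the duplicated-ID bug described above.
const previousIds = Array.from({ length: 60 }, (_, i) => `record-${i}`);
// 9 filtered results that overlap the old set (every fifth record).
const filteredIds = Array.from({ length: 9 }, (_, i) => `record-${i * 5}`);

// Buggy: clone the old array, then only overwrite the first N positions.
const buggy = [...previousIds];
filteredIds.forEach((id, i) => {
  buggy[i] = id;
});
// 'record-40' now appears twice: at index 8 (new result) and at its old
// position, index 40 (stale leftover), so a single click selects "2".

// Fixed: start from a cleared array before repopulating with fresh data.
const fixed: string[] = [];
filteredIds.forEach((id, i) => {
  fixed[i] = id;
});
```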
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Introduction
The new upgrade sequence engine released in `1.22` requires all workspaces to be on `1.21.0`, which means they will have a cursor on the sequence.
If someone upgrades from `1.20` to `1.22`, no `upgradeMigration` will exist, and the engine throws a rather unhelpful `Could not find any cursor, database might not been initialized correctly`.
Here we surface a meaningful error instead.
Improves code quality by migrating from raw SQL to TypeORM entities. The previous implementation was necessary for cross-schema table joins, but since we've migrated to the core schema we don't need it anymore.
- Also extracted `toIsoStringOrNull` to a utility, as it was duplicated several times
- Moved `isThrottled` logic from the job handler to the cron enqueuer
## Introduction
Within the same validate/build/run, we should be able to delete a view field targeting a label identifier and at the same time create one that repoints to it, without failing any validation.
This leads to the validation rule being moved into the cross-entity validation steps.
## Summary
- refresh the partner hero visual and testimonial presentation,
including the partner-specific carousel and illustration assets
- switch the testimonials top notch to the masked rendering approach
used elsewhere for more precise shape control
- extend halftone studio/export support and related geometry/state
handling used by the updated partner visuals
- include supporting website UI adjustments across navigation, pricing,
plans, and Salesforce-related sections
## Testing
- Not run (not requested)
This PR introduces more updates to the website, such as real testimonials, case studies, and copy for the pricing plans table. It also adds modals for "Talk to Us" and "Become a Partner".
## Summary
- fix the halftone studio image-switch behavior so image mode uses a
sane default preview distance instead of rendering nearly off-screen
- add shared preview-distance handling for shape and image modes, and
tune the default 3D idle auto-rotate speed
- update halftone controls/export plumbing to support the latest studio
settings changes
- refresh website UI/content in pricing, Salesforce, menu, and
billing-related sections
## Testing
- Ran targeted Jest tests for halftone state and footprint logic
- Ran TypeScript check for `packages/twenty-website-new`
- Broader app-level/manual testing not run
## Summary
- Replace per-file `setupFiles` + manual
`appBuild`/`appDeploy`/`appInstall` with vitest `globalSetup` that runs
`appDevOnce` once for the entire suite and `appUninstall` in teardown
- Add `fileParallelism: false` to prevent shared-state collisions
between test files
- Replace `app-install.integration-test.ts` with
`schema.integration-test.ts` that verifies app installation, custom
object schema (fields/relations), and CRUD
- Add reusable test helpers (`client.ts`, `metadata.ts`, `mutations.ts`)
- Applied to both `create-twenty-app` template and `postcard` example
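The new setup can be sketched roughly like this (the helper bodies below are placeholders for the real `appDevOnce`/`appUninstall` CLI calls; vitest invokes the exported `setup` once before the whole suite and `teardown` once after it):

```typescript
// global-setup.ts — rough sketch of the vitest globalSetup flow.
const calls: string[] = [];

const appDevOnce = async () => {
  calls.push('appDevOnce'); // build + deploy + install the app once
};

const appUninstall = async () => {
  calls.push('appUninstall'); // remove the app after the whole suite
};

export async function setup() {
  await appDevOnce();
}

export async function teardown() {
  await appUninstall();
}
```

In `vitest.config.ts`, `test.globalSetup` points at this file, and `test.fileParallelism: false` keeps test files from colliding on the shared app state.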
# Introduction
Refactoring the upgrade engine to handle cross-version upgrades, completely getting rid of the semver `version` at the DB and runtime level.
The version remains as a visual listing indicator for our CD process, and also during the dev environment in order to prepare the next release.
A release-process runbook will follow, documenting how to handle upgrade step patches, command insertion, etc., as changes need to be cascaded across all the involved supported versions.
**The upgrade sequence model:**
The sequence is a flat, ordered array of upgrade steps
(`UpgradeStep[]`), built from the registry by chaining all versions in
order, each version contributing its fast-instance → slow-instance →
workspace commands sorted by timestamp. Version is metadata for logging,
not used in the algorithm.
**Segments:**
The sequence naturally splits into alternating segments of contiguous
instance steps and contiguous workspace steps. The runner processes
segments in order:
- **Instance segment:** Run sequentially from the instance cursor. Each
step runs once globally.
- **Workspace segment:** Each workspace independently walks from its own
cursor through the end of the segment. Workspaces are independent within
a segment — they can be at different positions.
- **Synchronization (workspace → instance):** The runner blocks before
entering an instance segment. All active/suspended workspaces must have
completed the last workspace step of the preceding workspace segment. If
any workspace failed, abort. This is the only explicit synchronization
point.
- Instance → workspace ordering is implicit — the runner processes
segments sequentially, so the instance segment naturally completes
before the workspace segment begins.
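The segmentation step above can be sketched as follows (types simplified from the description; the real `UpgradeStep` carries more metadata):

```typescript
// Split a flat, ordered step sequence into alternating segments of
// contiguous instance steps and contiguous workspace steps.
type UpgradeStep = { kind: 'instance' | 'workspace'; name: string };
type Segment = { kind: 'instance' | 'workspace'; steps: UpgradeStep[] };

const splitIntoSegments = (sequence: UpgradeStep[]): Segment[] => {
  const segments: Segment[] = [];
  for (const step of sequence) {
    const last = segments[segments.length - 1];
    if (last !== undefined && last.kind === step.kind) {
      last.steps.push(step); // contiguous same-kind steps share a segment
    } else {
      segments.push({ kind: step.kind, steps: [step] });
    }
  }
  return segments;
};

const segments = splitIntoSegments([
  { kind: 'instance', name: 'fast-1' },
  { kind: 'instance', name: 'slow-1' },
  { kind: 'workspace', name: 'ws-1' },
  { kind: 'instance', name: 'fast-2' },
]);
// Three segments: instance (2 steps), workspace (1), instance (1). The
// runner blocks before the third until every workspace finished 'ws-1'.
```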
Full docs:
https://gist.github.com/prastoin/e62106d455fd72d6b6ebada8351e5492
## Version constants & type-level deprecation
Version management is split into three atomic constants:
`TWENTY_PREVIOUS_VERSIONS`, `TWENTY_CURRENT_VERSION`, and
`TWENTY_NEXT_VERSIONS`. Two derived constants compose them:
`CROSS_UPGRADE_SUPPORTED_VERSIONS` (previous + current — what the engine
runs) and `ALL_TWENTY_VERSIONS` (the full ordered tuple including next).
The registry service validates at module init that no version is
duplicated across constants and that at least one previous version
exists.
A `DeprecatedSinceVersion<RemoveAtVersion, T>` type utility resolves to
`T` while `TWENTY_CURRENT_VERSION` is below `RemoveAtVersion`, and to
`never` once it reaches it — turning deprecation into a compile-time
guarantee via `IndexOf` and `IsGreaterOrEqual` generics in
`twenty-shared`.
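A minimal sketch of how such a compile-time gate can be built from tuple-index generics (version list and helper names simplified; the real `twenty-shared` implementation may differ):

```typescript
// All versions in release order; tuple indices give a total ordering.
type AllVersions = ['1.21.0', '1.22.0', '1.23.0'];
type CurrentVersion = '1.22.0';

// Type-level index lookup via an accumulator tuple.
type IndexOf<
  Tuple extends readonly unknown[],
  Value,
  Acc extends unknown[] = [],
> = Tuple extends readonly [infer Head, ...infer Rest]
  ? Head extends Value
    ? Acc['length']
    : IndexOf<Rest, Value, [...Acc, unknown]>
  : never;

// A >= B: a tuple of length A must start with a tuple of length B.
type BuildTuple<N extends number, Acc extends unknown[] = []> =
  Acc['length'] extends N ? Acc : BuildTuple<N, [...Acc, unknown]>;
type IsGreaterOrEqual<A extends number, B extends number> =
  BuildTuple<A> extends [...BuildTuple<B>, ...infer _Rest] ? true : false;

// Resolves to T until CurrentVersion reaches RemoveAt, then to never.
type DeprecatedSinceVersion<RemoveAt extends AllVersions[number], T> =
  IsGreaterOrEqual<
    IndexOf<AllVersions, CurrentVersion>,
    IndexOf<AllVersions, RemoveAt>
  > extends true
    ? never
    : T;

// Still usable while the current version is 1.22.0…
const stillAllowed: DeprecatedSinceVersion<'1.23.0', string> = 'legacy column';
// …whereas DeprecatedSinceVersion<'1.22.0', string> is already `never`,
// so assignments to it stop compiling once the current version catches up.
```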
### `workspace.version` column deprecation
The column is replaced by cursor-based state inference from
`UpgradeMigration` records, but cannot be dropped in 1.22: workspaces
activated during 1.21 predate the cursor system and need their initial
cursor backfilled first (`backfillWorkspaceCreatedIn1_21_0Cursors`).
This backfill itself depends on a new `isInitial` column on
`UpgradeMigration`, bootstrapped via a targeted TypeORM migration before
the upgrade sequence runs.
Both functions and the entity field are typed with
`DeprecatedSinceVersion<'1.23.0', ...>`. When `TWENTY_CURRENT_VERSION`
reaches `1.23.0`, compile errors force their removal — and the
pre-declared `DropWorkspaceVersionColumnFastInstanceCommand` takes over
to drop the column.
## What's next
- CI cross-version upgrade (WIP)
- Banner asking to contact the Twenty administrator if the workspace is outdated
- Upgrade healthcheck CLI
## New unit/integration test pattern
Create a dedicated `createNestApp` that consumes a real database, so we don't have to mock any database interaction with the `upgradeMigrations`, allowing full coverage of the whole `upgradeRunnerService.run` core logic.
## Introduction
The view standard backfill seed was introduced during a partial patch, so a window exists where some workspaces were created without the view and weren't included in the upgrade process.
The fix doesn't cover every view missed during that window, only the one required for the command to pass.
## Overview
Adds comprehensive admin panel functionality for viewing workspace
details and AI chat threads.
## Changes
### Frontend
- **New Routes**: Added `AdminPanelWorkspaceDetail` and
`AdminPanelWorkspaceChatThread` pages with lazy loading
- **New Queries**:
- `getAdminWorkspaceChatThreads` - fetch chat threads for a workspace
- `getAdminChatThreadMessages` - fetch messages for a specific thread
- `workspaceLookupAdminPanel` - lookup workspace info and users
- **New Components**:
- `SettingsAdminWorkspaceDetail` - displays workspace info and chat
sessions tabs
- `SettingsAdminWorkspaceChatThread` - renders chat conversation with
message bubbles
- **Navigation**: Updated AI admin panel to link to workspace detail
pages
- **Settings Paths**: Added `AdminPanelWorkspaceDetail` and
`AdminPanelWorkspaceChatThread` paths
### Backend
- **New DTOs**:
- `AdminWorkspaceChatThreadDTO` - workspace chat thread data
- `AdminChatThreadMessagesDTO` - thread with messages
- `AdminChatMessageDTO` - individual message with parts
- **New Resolvers**: Added three queries to `AdminPanelResolver`
- **New Service Methods**:
- `workspaceLookup()` - fetch workspace info
- `getWorkspaceChatThreads()` - list chat threads
- `getChatThreadMessages()` - fetch thread messages with validation
- **Module Updates**: Added entity imports for workspace, user, AI chat,
and feature flag data
### Security
- Added `allowImpersonation` check before accessing chat data
- Validates workspace ownership and access permissions
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Fixes #19571
## What's happening
BlockNote's `.bn-panel` (the Upload/Embed popup for images, videos,
audio) has a hardcoded `width: 500px` in the upstream
`blocknoteStyles.css`. When you open it inside the side panel (~350px
wide), it overflows past the container's `overflow: hidden` and gets
clipped.
## Why percentage-based fixes don't work here
I initially tried `max-width: 100%` on `.bn-panel`, but it doesn't work: the panel sits inside a Floating UI popover that's absolutely positioned with no explicit width. Its width comes from its content (the 500px panel), so `max-width: 100%` just resolves to that same 500px. It's a circular reference.
## The fix
```css
& .bn-mantine {
  container-type: inline-size;
}

& .bn-mantine .bn-panel {
  width: min(500px, 100cqi);
  box-sizing: border-box;
}
```
`container-type: inline-size` on `.bn-mantine` lets us reference the editor's actual width using `cqi` units. `min(500px, 100cqi)` keeps the panel at 500px in wide containers (main note view) and shrinks it to fit in narrow ones (side panel).
I scoped `container-type` to `.bn-mantine` (BlockNote's own root wrapper) rather than the outer `StyledEditor` div, so the containment context stays within BlockNote's own tree.
## Before
<img width="373" height="530" alt="image"
src="https://github.com/user-attachments/assets/63fb407c-79d1-4faa-afb4-15a98fe4ba22"
/>
## After
<img width="332" height="597" alt="image"
src="https://github.com/user-attachments/assets/f7e91c28-ad35-44a5-9450-0b6d52714977"
/>
<img width="599" height="579" alt="image"
src="https://github.com/user-attachments/assets/20fd457b-2d06-4799-9164-f993ef37d833"
/>
## Tested
- Side panel: panel fits correctly, matches the "Add image" button width
- Full-width editor: panel stays at 500px
- Slash menu, formatting toolbar, drag handles: all unaffected
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
- Defer backend view creation for FIELDS widgets to prevent orphan views
when users cancel or leave before saving. Views are now created
optimistically in Jotai state (client-side UUID) and only persisted to
the database when the page layout is saved.
- Align the frontend's default field groups fallback with the backend
logic: split into General/Other groups, hide relation fields by default,
and filter invisible fields consistently across all code paths.
## Summary
- Follows up on #19610 which exported view/page-layout types but missed
field settings types
- Adds re-exports for `DateDisplayFormat`, `NumberDataType`, and
`FieldMetadataSettingsOnClickAction` from the SDK public API
- These enums are referenced in the return type of `defineObject` (via
field settings types like `FieldMetadataSettingsDate`,
`FieldMetadataSettingsNumber`, `FieldMetadataSettingsMultiItem`) but
were not publicly exported
- This causes TS4082 ("private name") errors for SDK consumers that have
`declaration: true` in their tsconfig, specifically when using
`defineObject` with fields that have settings (e.g. SELECT, DATE, NUMBER
fields)
## Summary
- Fixes TS4082 errors for SDK consumers with `declaration: true` in
their tsconfig
- The `define*` functions (e.g. `defineView`, `definePageLayout`,
`defineLogicFunction`) return `ValidationResult<T>` where `T` references
types from `twenty-shared` that were not re-exported from the SDK's
public barrel
- TypeScript cannot generate `.d.ts` files when the return type
references "private names" — types that exist in the package but aren't
publicly exported
### Added re-exports
**Enums** (used in `ViewManifest`, `HttpRouteTriggerSettings`):
- `ViewType`, `ViewFilterOperand`, `ViewFilterGroupLogicalOperator`,
`ViewOpenRecordIn`, `ViewVisibility`
- `HTTPMethod`
**Types** (used in `PageLayoutWidgetManifest`, `LogicFunctionConfig`):
- `GridPosition`, `PageLayoutWidgetConditionalDisplay`
- `InputJsonSchema`
## Summary
- Adds `isUnique?: boolean` to `RegularFieldManifest` in
`twenty-shared`, allowing SDK applications to declare unique constraints
on fields
- Updates the manifest-to-flat-field converter to read `isUnique` from
the manifest instead of hardcoding `false`
- Generates corresponding unique index metadata in
`computeApplicationManifestAllUniversalFlatEntityMaps` when a field has
`isUnique: true`, matching the behavior of the `CreateFieldInput` path
- Adds SDK-side validation rejecting `isUnique` on RELATION,
MORPH_RELATION, and FILES field types
- Adds integration test verifying manifest sync creates a unique index
for `isUnique` fields
- Adds SDK unit tests for `isUnique` validation on unsupported field
types
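The SDK-side rejection rule reduces to a simple type check (a sketch with assumed names; the actual `defineField` validation wiring differs):

```typescript
// Field types on which a unique constraint is not supported.
const UNIQUE_UNSUPPORTED_TYPES = ['RELATION', 'MORPH_RELATION', 'FILES'] as const;

type FieldManifest = {
  type: string;
  isUnique?: boolean; // defaults to false when not specified
};

// Returns an error message for invalid manifests, null when valid.
const validateIsUnique = (field: FieldManifest): string | null => {
  if (
    field.isUnique === true &&
    (UNIQUE_UNSUPPORTED_TYPES as readonly string[]).includes(field.type)
  ) {
    return `isUnique is not supported on ${field.type} fields`;
  }
  return null;
};
```

On the sync side, a valid `isUnique: true` field additionally produces unique index metadata, mirroring what the `CreateFieldInput` path already does.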
## Test plan
- [x] SDK unit tests: `defineField` accepts `isUnique: true` on TEXT,
rejects on RELATION and FILES
- [ ] Integration test: manifest sync with `isUnique: true` creates the
unique index in DB
- [ ] Verify `isUnique` defaults to `false` when not specified (backward
compatible)
- [ ] Verify standalone manifest fields (not nested in objects) also
generate unique indexes correctly
## Summary
- refine `/halftone` controls for material selection and slower 3D
rotation tuning
- switch the default halftone material to solid for new sessions
- include related halftone rendering, export, and glass material updates
across the website app
## Testing
- `yarn nx run twenty-website-new:typecheck`
- `yarn prettier
packages/twenty-website-new/src/app/halftone/_components/controls/AnimationsTab.tsx
--check`
- `yarn prettier
packages/twenty-website-new/src/app/halftone/_components/controls/controls-ui.tsx
packages/twenty-website-new/src/app/halftone/_components/controls/DesignTab.tsx
--check`
- `yarn prettier
packages/twenty-website-new/src/app/halftone/_lib/state.ts --check`
## Summary
- update pricing card styling to use a 4px border radius
- make pricing card CTAs fill the available card width
- reduce spacing below the pricing cards section
- fix pricing copy casing and punctuation, including the tailor-made
plan banner
- make engagement band headings support a configurable size and default
them to the smaller variant
## Testing
- Not run (not requested)
## Summary
- After a successful `uninstall`, clear all app registration config
(`appRegistrationId`, `appRegistrationClientId`, `appAccessToken`,
`appRefreshToken`) so the next `dev` run doesn't hit "app not found"
errors from leftover credentials
- On app re-registration (`ensureAppRegistration`), clear stale
`appAccessToken`/`appRefreshToken` that belonged to the previous
registration
- On OAuth token refresh failure, only clear config when the server
returns `invalid_client` (registration was deleted), not on transient
failures like network issues
- Extract a shared `parse-server-error` utility for consistently parsing
both GraphQL error codes (`extensions.code`/`subCode`) and OAuth error
responses (`error`/`error_description`)
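The shared utility's normalization can be sketched like this (shapes assumed; the real `parse-server-error` handles more cases):

```typescript
// Normalize GraphQL error extensions and OAuth error bodies into one shape.
type ParsedServerError = { code: string; message: string };

const parseServerError = (payload: unknown): ParsedServerError => {
  const body = payload as Record<string, any>;

  // GraphQL-style: { errors: [{ message, extensions: { code, subCode } }] }
  const gqlError = body?.errors?.[0];
  if (gqlError !== undefined) {
    return {
      code: gqlError.extensions?.subCode ?? gqlError.extensions?.code ?? 'UNKNOWN',
      message: gqlError.message ?? 'Unknown GraphQL error',
    };
  }

  // OAuth-style: { error, error_description }
  if (typeof body?.error === 'string') {
    return {
      code: body.error,
      message: body.error_description ?? body.error,
    };
  }

  return { code: 'UNKNOWN', message: 'Unrecognized server error' };
};

// Only a deleted registration should clear stored credentials; transient
// failures (network issues, 5xx) must leave the config intact.
const shouldClearConfig = (parsed: ParsedServerError) =>
  parsed.code === 'invalid_client';
```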
## Summary
- `twenty-shared` is private and never published to npm, so SDK
consumers couldn't resolve type imports like `from
'twenty-shared/types'` in the generated `.d.ts` files
- Replace `vite.config.sdk.ts` with `rollup-plugin-dts` which compiles
`src/sdk/index.ts` directly into a self-contained `dist/sdk/index.d.ts`
with all `twenty-shared` types inlined
- The JS output from `vite.config.sdk.ts` (`dist/sdk/*.js`) was unused —
the main export already maps to `dist/index.mjs` from the node Vite
config
## Changes
- **Deleted** `vite.config.sdk.ts` — its preserved-module JS output
wasn't referenced by any `package.json` export
- **Added** `rollup.config.dts.mjs` — uses `rollup-plugin-dts` to
compile SDK types from source with `twenty-shared` inlined (~850ms)
- **Updated** `project.json` — build/dev/build:sdk targets now use
rollup instead of the removed vite config
- **Updated** `tsconfig.json` — removed `vite.config.sdk.ts` from
include
## Test plan
- [ ] Run `npx nx build twenty-sdk` and verify `dist/sdk/index.d.ts`
contains no `twenty-shared` references
- [ ] Verify `dist/index.mjs` and `dist/index.cjs` are still produced
correctly
- [ ] Verify CLI (`dist/cli.cjs`) still works
- [ ] Verify `npx nx build:sdk twenty-sdk` works standalone
Made with [Cursor](https://cursor.com)
## Summary
- Fix `syncApplication` crashing with `ENTITY_NOT_FOUND` when a
navigation menu item child (with `folderUniversalIdentifier`) appears
before its folder in the manifest's `navigationMenuItems` array
- Add a generic topological sort in
`WorkspaceEntityMigrationBuilderService.validateAndBuild()` that detects
self-referential FKs from `ALL_MANY_TO_ONE_METADATA_RELATIONS` and
ensures parents are created before children
- This also covers other self-referential entities:
`viewFilterGroup.parentViewFilterGroup`,
`fieldMetadata.relationTargetFieldMetadata`, and
`rowLevelPermissionPredicateGroup.parentRowLevelPermissionPredicateGroup`
## Root cause
The builder iterated `createdFlatEntityMaps.byUniversalIdentifier` in
insertion order (from the manifest). Validation passed because it
checked both optimistic maps and remaining-to-create maps. But the
runner processed create actions sequentially, so
`resolveUniversalRelationIdentifiersToIds` threw when the folder hadn't
been created yet.
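The parent-before-child ordering can be sketched as a depth-first visit over the self-referential FK (names simplified; the real implementation derives the FK from `ALL_MANY_TO_ONE_METADATA_RELATIONS`):

```typescript
// Sort entities with a self-referential FK so parents precede children.
type Entity = { id: string; parentId?: string };

const sortParentsFirst = (entities: Entity[]): Entity[] => {
  const byId = new Map(entities.map((entity) => [entity.id, entity]));
  const sorted: Entity[] = [];
  const visited = new Set<string>();

  const visit = (entity: Entity) => {
    if (visited.has(entity.id)) return;
    visited.add(entity.id);
    const parent =
      entity.parentId !== undefined ? byId.get(entity.parentId) : undefined;
    if (parent !== undefined) visit(parent); // emit the parent first
    sorted.push(entity);
  };

  entities.forEach((entity) => visit(entity));
  return sorted;
};

// Child listed before its folder in the manifest, as in the bug report:
const ordered = sortParentsFirst([
  { id: 'menu-item', parentId: 'folder' },
  { id: 'folder' },
]);
// The folder's create action now runs before the menu item's.
```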
## Summary
- Enable application log console output in the `twenty-app-dev` Docker
image by adding `APPLICATION_LOG_DRIVER=CONSOLE` to the Dockerfile ENV
block
- Rename `APPLICATION_LOG_DRIVER_TYPE` to `APPLICATION_LOG_DRIVER` and
`ApplicationLogDriverType` to `ApplicationLogDriver` for consistency
with all other driver config variables (`EMAIL_DRIVER`, `LOGGER_DRIVER`,
`CAPTCHA_DRIVER`, etc.)
- Add `APPLICATION_LOG_DRIVER` to `.env.example`
## Summary
Refactored AI chat state management to use component family states
instead of global atoms. This change enables proper state isolation per
chat thread, preventing state leakage between different chat instances.
## Key Changes
- **Converted global atoms to component family states:**
- `agentChatErrorState` → `agentChatErrorComponentFamilyState`
- `agentChatIsStreamingState` →
`agentChatIsStreamingComponentFamilyState`
- `agentChatFirstLiveSeqState` →
`agentChatFirstLiveSeqComponentFamilyState`
- `agentChatHandleEventCallbackState` →
`agentChatHandleEventCallbackComponentFamilyState`
- `agentChatUsageState` → `agentChatUsageComponentFamilyState`
- `currentAIChatThreadTitleState` →
`currentAIChatThreadTitleComponentFamilyState`
- **Updated state access patterns:**
- Introduced `useAtomComponentFamilyStateCallbackState` hook to get
family state callbacks
- Changed from direct atom access to family key-based access using `{
threadId }` as the family key
- Updated all components and hooks to use the new family state patterns
- **Removed instance ID dependency:**
- Eliminated `AGENT_CHAT_INSTANCE_ID` constant usage
- Simplified state access by using thread ID directly as the family key
- **Updated affected components and hooks:**
- `useAgentChatSubscription`: Now creates family state atoms per thread
- `AgentChatMessagesFetchEffect`: Uses family state callbacks for event
handling
- `AgentChatThreadInitializationEffect`: Sets family state values per
thread
- `useAIChatThreadClick`: Updates family states when switching threads
- `AIChatErrorUnderMessageList`, `AIChatLastMessageWithStreamingState`,
`AIChatEmptyState`, `AIChatStandaloneError`, `SendMessageButton`,
`AIChatContextUsageButton`, `SidePanelAskAIInfo`: Updated to use family
state values with thread ID
## Implementation Details
- All new component family states use
`AgentChatComponentInstanceContext` for proper component-level isolation
- Family key structure: `{ threadId: string | null }`
- Removed `disposed` flag logic as it's no longer needed with proper
state isolation
- Updated dependency arrays in hooks to include family callback
functions
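The isolation property the family states provide can be illustrated with a dependency-free sketch (this is the idea, not the actual component-state implementation): one piece of state per family key, so threads never share or leak values.

```typescript
// One value per family key (here `{ threadId }`), with a shared default.
type FamilyKey = { threadId: string | null };

const createFamilyState = <T>(defaultValue: T) => {
  const store = new Map<string, T>();
  const keyOf = (key: FamilyKey) => JSON.stringify(key);
  return {
    get: (key: FamilyKey): T =>
      store.has(keyOf(key)) ? (store.get(keyOf(key)) as T) : defaultValue,
    set: (key: FamilyKey, value: T) => {
      store.set(keyOf(key), value);
    },
  };
};

const agentChatError = createFamilyState<string | null>(null);
agentChatError.set({ threadId: 'thread-a' }, 'stream failed');
// thread-b still sees the default: no leakage between chat threads.
```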
https://claude.ai/code/session_01D8syiJtCXu5bEHSr5KYkCt
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR enhances the handling of sensitive configuration variables by
improving masking logic and user experience when editing them. It adds
metadata-aware masking for dynamically marked sensitive variables and
clears sensitive values when entering edit mode.
## Key Changes
- **Backend masking improvements**: Updated `maskSensitiveValue()` to
accept metadata parameter, enabling masking of variables marked as
sensitive via metadata (not just predefined masking config). Sensitive
non-string values are masked as `********`.
- **Edit mode UX**: When editing a sensitive variable, the value field
is now cleared on entering edit mode to prevent exposing masked values
and ensure users intentionally provide new secret values.
- **Form state tracking**: Enhanced `useConfigVariableForm()` hook to
accept an `isEditing` parameter and properly track value changes for
sensitive variables during edit operations.
- **Input placeholder**: Updated placeholder text for sensitive variable
inputs to show `Enter a new secret value` instead of the generic
database storage message, providing clearer intent to users.
## Implementation Details
- The `maskSensitiveValue()` method now checks both predefined masking
configurations and runtime metadata to determine if a value should be
masked
- Sensitive string values use LAST_N_CHARS strategy (4 characters),
while non-string values are masked uniformly
- The form's `hasValueChanged` flag is set to true when editing
sensitive variables to ensure proper validation and submission handling
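The masking behavior can be sketched roughly as follows (names and signature assumed; the real `maskSensitiveValue()` also consults the predefined masking config):

```typescript
// Mask sensitive values: strings keep their last 4 characters visible
// (LAST_N_CHARS strategy); non-string values are masked uniformly.
const VISIBLE_CHARS = 4;

const maskSensitiveValue = (value: unknown, isSensitive: boolean): unknown => {
  if (!isSensitive) return value;
  if (typeof value !== 'string') return '********';
  const hiddenLength = Math.max(value.length - VISIBLE_CHARS, 0);
  return `${'*'.repeat(hiddenLength)}${value.slice(-VISIBLE_CHARS)}`;
};

// 'supersecret' → '*******cret'; a non-string like 42 → '********'
```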
https://claude.ai/code/session_01JiTckmuVJMQWpJ7TUGwsqb
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Bump `twenty-sdk`, `twenty-client-sdk`, and `create-twenty-app` from
`1.22.0-canary.2` to `1.22.0-canary.3` for publishing.
## Test plan
- Version-only change, no code modifications.
Made with [Cursor](https://cursor.com)
## Summary
- The `sdk-e2e-test` CI job has been failing since at least April 9th
because the nx dependency graph runs `database:reset` and
`start:ci-if-needed` in **parallel** — neither depends on the other. The
server starts before the DB tables are created, crashes with `relation
"core.keyValuePair" does not exist`, and `wait-on` times out after 10
minutes.
- Replace the `nx-affected` orchestration for e2e with explicit
**sequential** CI steps: build → create DB → reset DB → start server →
wait for health → run tests.
- Add server log dump on failure for easier future debugging.
## Root cause
In `packages/twenty-sdk/project.json`, the `test:e2e` target has:
```json
"dependsOn": [
"build",
{ "target": "database:reset", "projects": "twenty-server" },
{ "target": "start:ci-if-needed", "projects": "twenty-server" }
]
```
Since `database:reset` and `start:ci-if-needed` don't depend on each
other, nx can (and does) run them concurrently. `start:ci-if-needed`
fires `nohup nest start &` and immediately completes. The server process
tries to query `core.keyValuePair` before `database:reset` creates it →
crash → wait-on timeout → job failure.
## Summary
Enhanced the documentation and user guidance for null/empty value
filters across all field types by updating the `NullCheckEnum`
descriptions in the field filter Zod schemas. The descriptions now
provide clear, actionable guidance on how to use the "NULL" and
"NOT_NULL" filter options.
## Changes
- Updated 20+ field type filter descriptions to replace generic "Is null
or not null" text with more descriptive guidance
- New descriptions follow the pattern: "Check for missing or empty
[field type]. Use "NULL" to find records with no [value], "NOT_NULL" for
records with a [value]"
- Applied consistent messaging across all field types including:
- Basic types: UUID, text, number, boolean, date
- Complex types: select, multi-select, rating
- Composite types: amount, currency, first name, last name, street,
city, country, email, phone, link URL
- Relationship and JSON fields
- Specialized descriptions for composite fields (e.g., "Check for
missing or empty amount" for currency amount field)
## Benefits
- Improved API documentation clarity for developers using field filters
- Reduced ambiguity about NULL vs NOT_NULL filter behavior
- Better discoverability of filtering capabilities through schema
descriptions
- Consistent messaging across all field types for improved user
experience
https://claude.ai/code/session_0139Nb5bZykacNE69GQsFSrr
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- **Fix `createApplicationRegistration` flow**: The server's
`createApplicationRegistration` mutation returns a `clientSecret`, not
`accessToken`/`refreshToken` directly. The SDK now correctly requests
`clientSecret` and immediately performs an OAuth `client_credentials`
exchange to obtain `appAccessToken` and `appRefreshToken`, then stores
them in config.
- **New `exchangeCredentialsForTokens` helper**: Shared by both `dev`
and `dev --once` flows. Takes `clientId` + `clientSecret`, calls
`/oauth/token` with `client_credentials` grant, and persists the
resulting tokens.
- **Bump `twenty-sdk`, `twenty-client-sdk`, `create-twenty-app` to
`1.22.0-canary.2`**
## Context
The `1.22.0-canary.1` SDK release expected
`createApplicationRegistration` to return `accessToken`/`refreshToken`
directly, but the `v1.22.0` server returns `clientSecret`. This caused
`yarn twenty dev` and `yarn twenty dev --once` to fail with "No
registration found" errors.
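A minimal sketch of the exchange `exchangeCredentialsForTokens` performs — the `/oauth/token` endpoint and `client_credentials` grant are from the PR; the response field names follow standard OAuth and are assumptions here, and the real helper also persists the tokens to config:

```typescript
type AppTokens = { appAccessToken: string; appRefreshToken: string };

// Builds the standard client_credentials form body.
const buildClientCredentialsBody = (clientId: string, clientSecret: string): URLSearchParams =>
  new URLSearchParams({
    grant_type: 'client_credentials',
    client_id: clientId,
    client_secret: clientSecret,
  });

// Hedged sketch: performs the token exchange but persists nothing.
async function exchangeCredentialsForTokens(
  baseUrl: string,
  clientId: string,
  clientSecret: string,
): Promise<AppTokens> {
  const response = await fetch(`${baseUrl}/oauth/token`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: buildClientCredentialsBody(clientId, clientSecret),
  });
  if (!response.ok) {
    throw new Error(`Token exchange failed with status ${response.status}`);
  }
  const json = (await response.json()) as { access_token: string; refresh_token: string };
  return { appAccessToken: json.access_token, appRefreshToken: json.refresh_token };
}
```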
## Summary
- Add Playwright inspection scripts for problem canvas export, chain
logging, and metrics capture
- Save updated markdown snapshots and screenshot artifacts for homepage
and problem section debugging
- Include generated browser profile data used during the investigation
## Testing
- Not run (not requested)
## Summary
- Bumps `twenty-sdk`, `twenty-client-sdk`, and `create-twenty-app`
package versions from `0.9.0` to `1.22.0-canary.1` for the 1.22 canary
release.
## Test plan
- [ ] Verify packages build successfully (`npx nx build twenty-sdk`,
`npx nx build twenty-client-sdk`, `npx nx build create-twenty-app`)
- [ ] Verify `create-twenty-app` scaffolds new apps with the correct SDK
version
Made with [Cursor](https://cursor.com)
## Summary
- **SDK (`dev` & `dev --once`)**: After app registration, the CLI now
obtains an `APPLICATION_ACCESS` token via `client_credentials` grant
using the app's own `clientId`/`clientSecret`, and uses that token for
CoreApiClient schema introspection — instead of the user's
`config.accessToken` which returns the full unscoped schema.
- **Config**: `oauthClientSecret` is now persisted alongside
`oauthClientId` in `~/.twenty/config.json` when creating a new app
registration, so subsequent `dev`/`dev --once` runs can obtain fresh app
tokens without re-registration.
- **CI action**: `spawn-twenty-app-dev-test` now outputs a proper
`API_KEY` JWT (signed with the seeded dev workspace secret) instead of
the previous hardcoded `ACCESS` token — giving consumers a real API key
rather than a user session token.
## Motivation
When developing Twenty apps, `yarn twenty dev` was using the CLI user's
OAuth token for GraphQL schema introspection during CoreApiClient
generation. This token (type `ACCESS`) has no `applicationId` claim, so
the server returns the **full workspace schema** — including all objects
— rather than the scoped schema the app should see at runtime (filtered
by `applicationId`).
This caused a discrepancy: the generated CoreApiClient contained fields
the app couldn't actually query at runtime with its `APPLICATION_ACCESS`
token.
By switching to `client_credentials` grant, the SDK now introspects with
the same token type the app will use in production, ensuring the
generated client accurately reflects the app's runtime capabilities.
## Summary
This PR removes the `IS_USAGE_ANALYTICS_ENABLED` feature flag and makes
usage analytics features universally available. The feature flag guard
has been removed from the usage analytics resolver and all conditional
rendering based on this flag has been eliminated.
## Key Changes
- **Removed feature flag dependency**: Deleted
`IS_USAGE_ANALYTICS_ENABLED` from the `FeatureFlagKey` enum in
`twenty-shared`
- **Updated AI Usage tab**: Simplified `SettingsAIUsageTab` to remove
enterprise access checks and feature flag conditionals, now only checks
if ClickHouse is configured
- **Updated Usage Analytics section**: Removed feature flag guard from
`SettingsUsageAnalyticsSection` and added loading/empty state handling
- **Updated AI settings navigation**: Made the Usage tab always visible
in the AI settings tabs, removing conditional rendering based on feature
flag
- **Updated Billing Credits section**: Removed feature flag check before
showing the "View usage" button
- **Updated Settings routes**: Removed `SettingsProtectedRouteWrapper`
with feature flag requirement from usage routes
- **Updated GraphQL resolver**: Removed `@RequireFeatureFlag` decorator
and `FeatureFlagGuard` from the `getUsageAnalytics` query
- **Updated dev seeder**: Removed the feature flag seed entry for
`IS_USAGE_ANALYTICS_ENABLED`
## Implementation Details
- Usage analytics now gracefully handles loading states with
`UsageSectionSkeleton`
- Empty state messaging is shown when no usage data is available yet
- ClickHouse configuration remains the only requirement for usage
analytics functionality
- All enterprise-specific gating for AI usage analytics has been removed
in favor of ClickHouse availability checks
https://claude.ai/code/session_01MRFVXtquL3wS7qmQkDU3AT
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes the side panel close animation cleanup
(`sidePanelCloseAnimationCompleteCleanup`) never running during the
close transition.
This in turn fixes the issue where, in navbar edit mode, clicking a nav
item sometimes doesn't select it: the side panel opens but shows
`Select a navigation item to edit` instead of the selected item's
settings. The root cause was stale `isSidePanelClosing` state from a
previous close that wasn't cleaned up, causing the next open to run
cleanup synchronously (without emitting the close event), which
interfered with the navigation item selection flow.
### Root cause
The CSS `transition` on `StyledSidePanelWrapper` used
`${themeCssVariables.animation.duration.normal}s`, which Linaria
converts to a CSS custom property value like `width
var(--t-animation-duration-normal)s`. After CSS variable substitution,
`0.3` and `s` become two separate tokens, not a valid `<time>` dimension
(`0.3s`). The browser rejects the entire transition declaration, falls
back to `all 0s`, and the width change happens instantly with no
`transitionend` event.
### Fix
- Use `calc(var(--t-animation-duration-normal) * 1s)` to properly
construct the `<time>` value (consistent with ~30 other usages in the
codebase).
- Filter `handleTransitionEnd` to only process `width` transitions on
the wrapper element itself, ignoring bubbled child `background-color`
transitions.
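The broken vs fixed transition values, plus the `transitionend` filter, can be sketched as follows (the CSS variable name is from the PR; the predicate helper is a stand-in for the inline check in `handleTransitionEnd`):

```typescript
const DURATION_VAR = '--t-animation-duration-normal';

// Broken: after variable substitution, `var(...)s` is two tokens ("0.3" and
// "s"), not a valid <time>, so the browser drops the whole declaration.
const brokenTransition = `width var(${DURATION_VAR})s`;

// Fixed: calc() multiplies the unitless value by 1s to build a valid <time>.
const fixedTransition = `width calc(var(${DURATION_VAR}) * 1s)`;

// Only the wrapper's own width transition counts; bubbled child transitions
// (e.g. background-color) are ignored.
const isWrapperWidthTransition = (
  target: unknown,
  currentTarget: unknown,
  propertyName: string,
): boolean => target === currentTarget && propertyName === 'width';
```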
## Before
https://github.com/user-attachments/assets/496ccbff-dc96-487d-a994-451dbf2a5165
## After
https://github.com/user-attachments/assets/78255240-1d7d-42cd-9b56-25627613883a
## Summary
This PR refactors the AI chat thread state management to use `null`
instead of a sentinel string value (`AGENT_CHAT_UNKNOWN_THREAD_ID`) to
represent an uninitialized or new chat thread. This improves type safety
and makes the code more idiomatic by using `null` to represent the
absence of a value.
## Key Changes
- **Removed sentinel constant**: Deleted `AGENT_CHAT_UNKNOWN_THREAD_ID`
constant and replaced all usages with `null`
- **Updated state types**: Changed `currentAIChatThreadState`,
`agentChatLastDiffSyncedThreadState`, and
`agentChatDisplayedThreadState` to use `string | null` type with `null`
as default value
- **Updated component family states**: Modified message-related state
families to accept `threadId: string | null` instead of `threadId:
string`
- **Refined null checks**: Added explicit `null` checks in:
- `AgentChatMessagesFetchEffect`: Updated `isNewThread` logic to check
for `null` first
- `useAIChatThreadClick` and `useSwitchToNewAIChat`: Added guards to
only save drafts when `currentAIChatThread !== null`
- `AgentChatThreadInitializationEffect`: Added null check before UUID
validation
- `useEnsureAgentChatThreadIdForSend`: Added null check before comparing
with draft key
- **Updated fallback logic**: Used nullish coalescing operator (`??`) in
`AIChatTab` and `useAIChatEditor` to default to
`AGENT_CHAT_NEW_THREAD_DRAFT_KEY` when thread is null
- **Enhanced refetch safety**: Added early return in
`handleRefetchMessages` to prevent refetching when in new thread state
## Implementation Details
- The change maintains backward compatibility by treating `null` the
same way the code previously treated `AGENT_CHAT_UNKNOWN_THREAD_ID`
- All draft saving operations now safely check for null before
attempting to store drafts
- The nullish coalescing pattern (`currentAIChatThread ??
AGENT_CHAT_NEW_THREAD_DRAFT_KEY`) ensures proper fallback behavior when
accessing draft storage
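The state type change and fallback can be sketched like this — the `string | null` type and the nullish-coalescing pattern are from the PR; the draft key's literal value is an assumption:

```typescript
// null replaces the old AGENT_CHAT_UNKNOWN_THREAD_ID sentinel.
type CurrentAIChatThread = string | null;

// Hypothetical value — the real constant lives in the codebase.
const AGENT_CHAT_NEW_THREAD_DRAFT_KEY = 'agent-chat-new-thread-draft';

const getDraftStorageKey = (currentAIChatThread: CurrentAIChatThread): string =>
  currentAIChatThread ?? AGENT_CHAT_NEW_THREAD_DRAFT_KEY;

// Drafts are only saved for real threads, mirroring the new null guards.
const shouldSaveDraft = (currentAIChatThread: CurrentAIChatThread): boolean =>
  currentAIChatThread !== null;
```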
https://claude.ai/code/session_01Pz8KCygSNgBPYsndbMq8f7
---------
Co-authored-by: Claude <noreply@anthropic.com>
Fixes
https://twenty-v7.sentry.io/issues/7398810663/?project=4507072563183616
## Bug description
The `mergeMultipleRecords` command's `conditionalAvailabilityExpression`
used `numberOfSelectedRecords >= 2`, which evaluates correctly in both
selection and exclusion (Select All) modes. However, when the command
executes, `buildHeadlessCommandContextApi` returns an empty
`selectedRecords` array in exclusion mode because it can't synchronously
resolve record IDs from an "all minus excluded" set. This caused
`MergeMultipleRecordsCommand` to throw on the empty-array guard.
## Fix
Prepended `!isSelectAll &&` to the merge command's conditional
expression, preventing it from appearing when records are selected via
Select All.
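The updated availability check boils down to (variable names from the PR; the real expression is a string evaluated by the command menu, not a function):

```typescript
// Sketch of the merge command's conditional availability after the fix.
const isMergeCommandAvailable = (
  isSelectAll: boolean,
  numberOfSelectedRecords: number,
): boolean => !isSelectAll && numberOfSelectedRecords >= 2;
```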
When creating a DB from scratch we still need to run the slow instance
command so the associated queries are still applied to the empty DB.
Please note that by default, if there's no workspace, the slow instance
runner will skip the `runDataMigration` part.
# Introduction
Avoid having a time window where the logic function is unavailable while
being built, by implementing an update flow instead of deleting and
recreating it every time.
fixes https://twenty-v7.sentry.io/issues/7371110591/
**Note: executor code propagation (pre-existing limitation)**
The Lambda executor shim
(`logic-function-drivers/constants/executor/index.mjs`) is only deployed
when a Lambda is first created. If this file is edited, the change won't
propagate to existing Lambdas — neither before nor after this PR. A
follow-up could compare the deployed `CodeSha256` against a local
checksum to detect drift and trigger a code update via
`UpdateFunctionCodeCommand`.
A follow-up should implement this as code versioning or checksum diffing.
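The checksum approach could look roughly like this — a hedged sketch, assuming Lambda's `CodeSha256` semantics (base64-encoded SHA-256 of the deployed zip); the helper names are made up, and the real follow-up would call `UpdateFunctionCodeCommand` when drift is detected:

```typescript
import { createHash } from 'node:crypto';

// Lambda reports CodeSha256 as the base64-encoded SHA-256 of the zip bundle.
const computeCodeSha256 = (zipBytes: Buffer): string =>
  createHash('sha256').update(zipBytes).digest('base64');

// Compare the deployed checksum against the local executor shim bundle to
// detect drift (hypothetical helper, not code from this PR).
const hasExecutorDrift = (deployedCodeSha256: string, localZip: Buffer): boolean =>
  deployedCodeSha256 !== computeCodeSha256(localZip);
```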
## Summary
- Added `flex: 1` to `StyledColumnContainer` in `RecordBoardColumns.tsx`
- The column wrapper wasn't stretching vertically, so the droppable area
only covered the top of each column where cards existed
- Now the entire column height is a valid drop target
## Test plan
- [ ] Open a board/kanban view
- [ ] Drag a card from one column
- [ ] Drop it in the lower/empty area of another column
- [ ] Verify the drop registers correctly
Fixes #18842
## Summary
Improved error handling in the StreamAgentChatJob to properly propagate
and reject errors that occur during the agent chat stream processing.
## Key Changes
- Added error re-throw in the catch block to ensure errors are not
silently swallowed
- Introduced `streamError` variable to capture errors from the stream's
`onError` callback
- Modified the promise resolution logic to reject with the captured
stream error instead of silently resolving on success, ensuring proper
error propagation to callers
## Implementation Details
The changes ensure that errors occurring during stream processing are
properly captured and propagated:
1. Stream errors are now captured in the `onError` callback and stored
in the `streamError` variable
2. After the stream completes and the message is persisted to the
database, the promise is rejected if a stream error occurred
3. The outer catch block now re-throws errors to prevent silent failures
This fix prevents scenarios where stream processing errors would be
masked by a successful database persistence operation.
https://claude.ai/code/session_016DQodL32HYv6CR1cMASNHZ
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Removes all workspace schema definitions for `Favorite` and
`FavoriteFolder` entities, which have been fully migrated to
`NavigationMenuItems`
- Deletes 26 standalone files including workspace entities, NestJS
modules, services, listeners, jobs, standard application builders (field
metadata, views, view fields, view field groups, indexes, page layouts),
mocks, and integration tests
- Cleans up ~40 modified files: removes `favorites` relation from 10
workspace entities and their field metadata utils, removes entries from
all builder maps, shared constants (`STANDARD_OBJECTS`,
`CoreObjectNameSingular`, `DEFAULT_RELATIONS_OBJECTS_STANDARD_IDS`), SDK
default relations, AI tool filtering, and standard object icons
## Summary
Aligns **object metadata** icons with the **tinted tile** look
everywhere we show a workspace object, and **retires** the
navigation-only `NavigationMenuItemStyleIcon` wrapper in favor of
**shared** UI primitives under `@/ui/display` and `@/object-metadata`.
## What changed
### Global tinted icon building blocks (`@/ui/display`)
- **`TintedIconTile`** / **`StyledTintedIconTileContainer`** support
optional **`size`** and **`stroke`**, and grow the tile when **`size`**
is set so layouts match previous `theme.icon` usage.
- Shared helpers and constants for theme color parsing and tinted
backgrounds/borders/icon color (e.g.
**`getTintedIconTileStyleFromColor`**, **`parseThemeColor`**,
**`getColorFromTheme`**, related constants).
### Object metadata icon (`@/object-metadata`)
- **`ObjectMetadataIcon`** composes **`TintedIconTile`** with
**`getObjectColorWithFallback`**, forwards optional **`size`** /
**`stroke`** for **visual parity** with old `getIcon` + explicit sizing.
- **`getSelectOptionIconFromObjectMetadataItem`** returns an
**`IconComponent`** for selects/menus that expect a component, not a
React node.
### Navigation module cleanup
- **Removed** **`NavigationMenuItemStyleIcon`**; call sites use
**`ObjectMetadataIcon`**, **`TintedIconTile`**, and/or the shared
**`getTintedIconTileStyleFromColor`** pipeline so the same treatment is
**not** tied to the navigation package.
- **`NavigationMenuItemIcon`**, view/link overlays, DnD handle, and
sidebar editor flows updated to use the shared pattern where they render
object (or tinted) icons.
### Product surfaces updated (non-exhaustive)
- **Settings:** role object picker/rows, data model
tables/graph/overview, object preview summary, webhooks entity list,
morph relation multiselect.
- **Workflows:** create/update/delete/upsert/find records, triggers,
filters, variables dropdowns, AI agent object rows, object dropdowns.
- **Shell:** side panel object filter / data sources / folder chrome
where object icons appear.
- **Records:** index header icon, show breadcrumb styling.
- **Activity:** timeline event icon when linked object metadata applies.
- **`NavigationDrawerItem`:** tinted branch aligned with shared
**`TintedIconTile`** behavior.
---------
Co-authored-by: Etienne <45695613+etiennejouan@users.noreply.github.com>
## Context
Emails and calendars can only be associated with specific objects
(companies, people...) and we were adding those tabs for all custom
objects. I'm removing these from the default config
## Summary
- refresh the website halftone studio with new rendering controls, UI
updates, and export/runtime plumbing
- improve halftone output quality by opening highlights, grouping
shadows, and reducing visible row artifacts
- update home/problem illustration assets, 3D model references, and
related illustration components to use the new visuals
- adjust footer and layout-related website wiring to support the updated
illustration experience
## Testing
- Not run (not requested)
---------
Co-authored-by: Abdullah <125115953+mabdullahabaid@users.noreply.github.com>
- Removes the intermediate `CommandMenuItemConfig` /
`CommandConfigContext` / `CommandMenuItemDisplay` abstraction layers,
replacing them with a single `CommandMenuItemRenderer` that renders
directly from the command menu items returned by the backend
- Eliminates the server-items/ subdirectory by moving its contents
(hooks/, contexts/, states/, display/, edit/) up into the parent
command-menu-item/ module, removing an unnecessary nesting level.
## Changes
### Email Attachments
- Added `EmailAttachmentsField` component for uploading and managing
email attachments
- New `useUploadEmailAttachment` hook for handling file uploads with
size validation
- New `UPLOAD_EMAIL_ATTACHMENT_FILE` mutation for backend file
persistence
- Integrated attachments into email composer with file validation
- Added `EmailRecipientLimits` constant to enforce max recipients (100)
on frontend
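The validation pieces can be sketched as follows — the 100-recipient cap is from the PR, but the attachment size limit and helper names are assumptions:

```typescript
// Hypothetical size check inside useUploadEmailAttachment; the 25 MB limit
// is an assumption, not taken from the PR.
const MAX_EMAIL_ATTACHMENT_SIZE_BYTES = 25 * 1024 * 1024;

const validateEmailAttachment = (file: { name: string; size: number }): void => {
  if (file.size > MAX_EMAIL_ATTACHMENT_SIZE_BYTES) {
    throw new Error(`Attachment "${file.name}" exceeds the maximum allowed size`);
  }
};

// From the PR: recipients are capped at 100 on the frontend.
const MAX_EMAIL_RECIPIENTS = 100;

const isRecipientCountValid = (count: number): boolean => count <= MAX_EMAIL_RECIPIENTS;
```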
### Open in App Click Action
- New `useOpenEmailInAppOrFallback` hook to open emails in the in-app
composer
- Email fields now default to "Open in app" action instead of "Open as
link"
- New `SettingsDataModelFieldOnClickActionForm` support for
`OPEN_IN_APP` action
- Email secondary table cell button now offers in-app composer as
alternative action
- `AttachmentChip` component moved from advanced-text-editor to file
module for reuse
### Refactoring & New Utilities
- Extracted `useComposeEmailForTargetRecord` hook for consistent email
composer opening
- New `useResolveDefaultEmailRecipient` hook to resolve recipient based
on record type
- New `getPrimaryEmailFromRecord` utility for safe email field access
- New `EmptyInboxPlaceholder` component with CTA button
- Simplified `ComposeEmailButton` using new hooks
- Enhanced `ComposeEmailCommand` to support bulk Person selections
- Updated `useSendEmail` to accept and forward attachments
- Recipient count validation with warning in composer footer
### Backend
- New `FileEmailAttachmentModule` with resolver and service
- New `file-email-attachment.command` for record selection menu items
- Updated `SendEmailInput` GraphQL type to include `files` field
- Email tool constants and exceptions updated
### Type Updates
- Added `SendEmailAttachmentInput` GraphQL type
- Added `FileFolder.EMAIL_ATTACHMENT` to file folder interface
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
## Summary
- Add SSE (`text/event-stream`) as an alternative response format on
`POST /mcp` per the MCP streamable-http spec
- When clients send `Accept: text/event-stream`, the server responds
with SSE wire format; otherwise returns JSON as before (fully backwards
compatible)
- Emit a `notifications/progress` SSE event before tool execution to
signal long-running operations
- No sessions, no GET SSE, no protocol version bump — this is
transport-level only (Phase 2)
## Changes
- **New**: `write-sse-event.util.ts` — writes correctly formatted SSE
events to Express Response
- **New**: `mcp-progress-notification.const.ts` — constants for progress
notification method and token prefix
- **Modified**: `mcp-core.controller.ts` — checks `Accept` header,
branches into SSE vs JSON response path
- **Modified**: `mcp-protocol.service.ts` — passes optional `sseWriter`
callback to tool executor
- **Modified**: `mcp-tool-executor.service.ts` — emits progress
notification via `sseWriter` before tool execution
- **Tests**: Unit tests for SSE utility, controller SSE/JSON paths, tool
executor progress notifications, and integration tests for SSE streaming
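The SSE wire format is standard, so the utility reduces to roughly this (the real `write-sse-event.util.ts` signature may differ):

```typescript
// Each SSE event is an `event:` line, a `data:` line, and a blank line.
const formatSseEvent = (event: string, data: unknown): string =>
  `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;

// Writing to an Express Response is then just res.write(formatSseEvent(...))
// after setting Content-Type: text/event-stream.
```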
## Test plan
- [x] Unit tests: `writeSseEvent` utility produces correct SSE wire
format
- [x] Unit tests: Controller returns SSE headers and writes events when
`Accept: text/event-stream`
- [x] Unit tests: Controller returns JSON when `Accept:
application/json` only
- [x] Unit tests: Notifications (no `id`) return 202 regardless of
Accept header
- [x] Unit tests: Tool executor emits progress notification via
sseWriter
- [x] Unit tests: Tool executor works without sseWriter (backwards
compatible)
- [x] Integration tests: SSE response for ping, JSON fallback, progress
notification before tool call
- [x] All 503 test suites pass, typecheck clean
https://claude.ai/code/session_01QrqjBUXePJkPMd6gBAoWaR
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- **Drop the `objectMetadata.dataSourceId` foreign key and index** via a
1-22 fast instance command — column kept nullable for data preservation
- **Delete `DataSourceService`, `DataSourceModule`, and
`DataSourceException`** — all code now uses `workspace.databaseSchema`
directly
- **Remove `IS_DATASOURCE_MIGRATED` feature flag** from default flags
and all branching logic
- **Simplify workspace/object creation pipelines** —
`WorkspaceManagerService`, `DevSeederService`, and the object creation
action handler no longer route through `DataSourceService`
- **Keep `DataSourceEntity` and the `dataSource` table** for historical
data — entity stripped of all ORM relations
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
Fix MCP `/mcp` transport handling for clients using `streamable-http`.
## Changes
- add explicit `GET /mcp` and `DELETE /mcp` handlers
- return `405 Method Not Allowed` with `Allow: POST`
- keep `POST /mcp` protected by MCP auth guards
- mark `GET` and `DELETE` as intentionally public with
`PublicEndpointGuard` + `NoPermissionGuard`
- update the advertised MCP protocol version to `2025-03-26`
- add unit and integration coverage for the new behavior
## Why
The frontend advertises the MCP server as `streamable-http`, but the
backend only effectively handled `POST /mcp`. Some MCP clients probe
`GET /mcp` during connection setup, so unsupported methods need explicit
method-level responses instead of falling through or being blocked
before the handler.
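The method-level behavior can be sketched as a plain function (the real implementation is NestJS controller handlers with guards, not a switch):

```typescript
type McpRouteResult = { status: number; headers?: Record<string, string> };

const routeMcpMethod = (method: string): McpRouteResult => {
  if (method === 'POST') {
    return { status: 200 }; // actual JSON-RPC handling sits behind the auth guards
  }
  // GET and DELETE probes get an explicit 405 instead of falling through.
  return { status: 405, headers: { Allow: 'POST' } };
};
```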
## Validation
- verified locally:
- `GET /mcp` -> `405`
- `DELETE /mcp` -> `405`
- unauthenticated `POST /mcp` -> `401`
- passed controller unit tests
- added integration assertions for `GET` and `DELETE`
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
## Summary
This PR adds support for attaching files to agent chat messages. Users
can now upload files when sending messages to the AI agent, and these
files are properly processed, stored, and made available to the agent
with signed URLs.
## Key Changes
- **File attachment input**: Added `fileIds` parameter to the
`sendChatMessage` GraphQL mutation to accept file IDs from the client
- **File processing**: Implemented `buildFilePartsFromIds()` method to
convert file IDs into file UI parts with signed URLs
- **Message composition**: Updated user messages to include both text
and file parts when files are attached
- **File URL signing**: Integrated `FileUrlService` to generate signed
URLs for files in the AgentChat folder, ensuring secure access
- **Message persistence**: Files are now included in the message parts
stored in the database and retrieved when loading conversation history
- **File metadata mapping**: Enhanced `mapDBPartToUIMessagePart()` to
properly extract MIME types from file entities and include file IDs
## Implementation Details
- Files are fetched from the database using the provided file IDs and
workspace context
- Each file is converted to an `ExtendedFileUIPart` with proper metadata
(filename, MIME type, signed URL, and file ID)
- When loading messages from the database, file parts are enhanced with
signed URLs to ensure they remain accessible
- The `loadMessagesFromDB()` method now requires the workspace ID to
properly sign file URLs
- File attachments are seamlessly integrated into the existing message
part system alongside text content
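The conversion step can be sketched like this — the signer callback stands in for `FileUrlService`, and the entity/part field names are assumptions:

```typescript
type FileEntity = { id: string; name: string; mimeType: string; path: string };
type ExtendedFileUIPart = {
  type: 'file';
  fileId: string;
  filename: string;
  mediaType: string;
  url: string;
};

// Hedged sketch of buildFilePartsFromIds: resolve each id to a file entity
// and attach a signed URL; unresolved ids are skipped.
const buildFilePartsFromIds = (
  fileIds: string[],
  filesById: Map<string, FileEntity>,
  signUrl: (path: string) => string,
): ExtendedFileUIPart[] =>
  fileIds.flatMap((fileId) => {
    const file = filesById.get(fileId);
    if (file === undefined) {
      return []; // id doesn't resolve to a stored file
    }
    return [
      {
        type: 'file' as const,
        fileId,
        filename: file.name,
        mediaType: file.mimeType,
        url: signUrl(file.path),
      },
    ];
  });
```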
https://claude.ai/code/session_01TAdN1gBzeiYELX4XDrrYY1
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
- Removes the pre-install function
- Executes the post-install function **asynchronously** at application
installation
- Adds an optional `shouldRunOnVersionUpgrade` boolean on the
post-install function definition (default: `false`)
- Updates `PostInstallPayload` to
```ts
export type PostInstallPayload = {
  previousVersion?: string;
  newVersion: string;
};
```
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
Refactored the `AgentChatMessageUIToolCallPart` output structure to
flatten the nested result object, simplifying the data model and
improving code clarity.
## Key Changes
- **Flattened output structure**: Moved `success`, `message`, and
`result` fields from `output.result` directly to `output`, eliminating
unnecessary nesting
- **Removed `toolName` field**: Deleted the unused `toolName` property
from the output object
- **Made fields optional**: Changed `error` and `result` to optional
fields to better represent cases where they may not be present
- **Updated hook logic**: Modified `useProcessUIToolCallMessage` to
access the flattened structure:
- Changed `toolExecutionPart.output.result.success` to
`toolExecutionPart.output.success`
- Changed `toolExecutionPart.output.result.result` to
`toolExecutionPart.output.result`
- Added null check for `navigateAppOutput` to handle cases where result
is undefined
## Implementation Details
The refactoring maintains backward compatibility in functionality while
simplifying the data structure. The optional `result` field now properly
reflects that navigation output may not always be present, with an
explicit guard clause added to handle this case gracefully.
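The shape change can be sketched as (field names from the PR; exact types are assumptions):

```typescript
// Before: success/message/result nested under output.result, plus an unused toolName.
type OldToolCallOutput = {
  toolName: string;
  result: { success: boolean; message: string; result?: unknown };
};

// After: flattened, with error and result optional.
type NewToolCallOutput = {
  success: boolean;
  message: string;
  error?: string;
  result?: unknown;
};

// Guard mirroring the null check added for navigateAppOutput.
const hasNavigationResult = (output: NewToolCallOutput): boolean =>
  output.result !== undefined && output.result !== null;
```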
https://claude.ai/code/session_01EnYKwVaCJyhgNPsi7p3YSq
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR adds a database migration command to backfill standard skills
for existing workspaces and improves the skill loading tool to provide
dynamic, workspace-specific skill availability information instead of
hardcoded skill names.
## Key Changes
- **New Migration Command**: Added `BackfillStandardSkillsCommand`
(v1.22.0) that:
- Identifies missing standard skills in existing workspaces
- Compares workspace skills against the standard skill definitions
- Creates missing skills using the workspace migration service
- Supports dry-run mode for safe testing
- Properly logs all operations and handles failures
- **Enhanced Skill Loading Tool**: Updated `createLoadSkillTool` to:
- Accept a new `listAvailableSkillNames` function parameter
- Dynamically fetch available skills from the workspace instead of using
hardcoded skill names
- Provide accurate, context-aware error messages when skills are not
found
- Gracefully handle workspaces with no available skills
- **Service Updates**: Modified skill tool implementations in:
- `McpProtocolService`: Integrated `findAllFlatSkills` to list available
skills
- `ChatExecutionService`: Integrated `findAllFlatSkills` to list
available skills
- **Module Registration**: Added `BackfillStandardSkillsCommand` to the
v1.22 upgrade module
- **Test Updates**: Updated `McpProtocolService` tests to mock the new
`findAllFlatSkills` method
## Implementation Details
The backfill command uses the existing workspace migration
infrastructure to safely create skills, ensuring consistency with other
metadata operations. The skill availability messaging now reflects the
actual skills present in each workspace, improving user experience when
skills are not found.
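The core comparison can be sketched as a set difference — names are assumptions; the real command then routes each missing skill through the workspace migration service:

```typescript
// Hedged sketch of the missing-skill computation in BackfillStandardSkillsCommand.
const findMissingStandardSkills = (
  standardSkillNames: string[],
  workspaceSkillNames: string[],
): string[] => {
  const existing = new Set(workspaceSkillNames);
  return standardSkillNames.filter((name) => !existing.has(name));
};
```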
https://claude.ai/code/session_012fXeP3bysaEgWsbkyu4ism
Co-authored-by: Claude <noreply@anthropic.com>
## Context
- Extend the FIELD widget to support RICH_TEXT fields alongside existing
RELATION/MORPH_RELATION fields
- Add EDITOR display mode that renders a full rich text editor, and
FIELD display mode that shows a compact single-line preview
- Enforce mutual exclusivity: EDITOR is only available for RICH_TEXT,
CARD only for RELATION, FIELD for both
- Refactor widget configuration into a centralized `FIELD_WIDGET_CONFIG`
constant and a shared `useFieldWidgetEligibleFields` hook for better
scalability. Later we might use a discriminated union from the GraphQL
schema.
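The mutual-exclusivity rules above could be encoded roughly like this — the rules come from the PR, but the constant's exact shape is an assumption:

```typescript
// Hypothetical shape of the centralized FIELD_WIDGET_CONFIG constant.
const FIELD_WIDGET_CONFIG = {
  RICH_TEXT: { displayModes: ['EDITOR', 'FIELD'] }, // EDITOR only for RICH_TEXT
  RELATION: { displayModes: ['CARD', 'FIELD'] },    // CARD only for relations
  MORPH_RELATION: { displayModes: ['CARD', 'FIELD'] },
} as const;

type ConfiguredFieldType = keyof typeof FIELD_WIDGET_CONFIG;

const isDisplayModeAllowed = (fieldType: ConfiguredFieldType, mode: string): boolean =>
  (FIELD_WIDGET_CONFIG[fieldType].displayModes as readonly string[]).includes(mode);
```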
<details>
<summary>
EDITOR mode
</summary>
<img width="1095" height="731" alt="Screenshot 2026-04-09 at 17 31 41"
src="https://github.com/user-attachments/assets/cebffd0e-07ea-4f74-a3dc-ef987daa17ea"
/>
</details>
<details>
<summary>
FIELD mode
</summary>
<img width="986" height="378" alt="Screenshot 2026-04-09 at 17 35 00"
src="https://github.com/user-attachments/assets/c90a8046-fdd0-4321-8ba6-f47d89e9d42a"
/>
</details>
<details>
<summary>
Open FIELD mode
</summary>
<img width="758" height="480" alt="Screenshot 2026-04-09 at 17 35 04"
src="https://github.com/user-attachments/assets/c53cc120-0b6f-47a0-808c-27b26f9f53ec"
/>
</details>
## Summary
- System objects with valid views were incorrectly hidden in the View >
System objects picker
- The system picker page used a different filter (`view.key !==
ViewKey.INDEX`) than the parent page
(`isViewDisplayableInNavigationMenu`), causing a mismatch where the
"System objects" link was visible but clicking it showed an empty list
- Aligned filtering in `SidePanelNewSidebarItemViewSystemPickerSubPage`
to use `isViewDisplayableInNavigationMenu` and exclude views already in
the workspace, consistent with the non-system picker page
## Summary
- **Split the `rolesPermissions` cache query into parallel queries** to
eliminate a 5-way LEFT JOIN cartesian product. A workspace with a role
having 5 objectPermissions × 4 permissionFlags × 124 fieldPermissions ×
5 RLP predicates × 5 RLP groups was returning 62,000 rows from just ~162
distinct records, causing 10s+ query timeouts on view updates.
- **Added missing `roleId` index on `permissionFlag` table** — the only
permission table without one, which was causing sequential scans on the
JOIN.
<img width="1511" height="825" alt="Capture d’écran 2026-04-09 à 14 19
52"
src="https://github.com/user-attachments/assets/cd6c533e-abc9-45f9-b5c1-5cb7341d5dfe"
/>
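The blow-up from the numbers above (5 × 4 × 124 × 5 × 5) can be illustrated with a hedged sketch — the repository accessors are stand-ins, not the actual query code:

```typescript
// A multi-way LEFT JOIN returns the product of the collection sizes, while
// parallel queries fetch only the sum.
const joinRowCount = (collectionSizes: number[]): number =>
  collectionSizes.reduce((product, size) => product * size, 1);

const parallelRowCount = (collectionSizes: number[]): number =>
  collectionSizes.reduce((sum, size) => sum + size, 0);

// The split itself: independent queries stitched together in memory.
const loadRolePermissions = async (queries: {
  objectPermissions: () => Promise<unknown[]>;
  permissionFlags: () => Promise<unknown[]>;
  fieldPermissions: () => Promise<unknown[]>;
}) => {
  const [objectPermissions, permissionFlags, fieldPermissions] = await Promise.all([
    queries.objectPermissions(),
    queries.permissionFlags(),
    queries.fieldPermissions(),
  ]);
  return { objectPermissions, permissionFlags, fieldPermissions };
};
```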
- Move `GetChatThreads` query from the generic metadata pipeline
(`useLoadStaleMetadataEntities`) into `AgentChatThreadInitializationEffect`,
gated by `useHasPermissionFlag(AI_SETTINGS)` — prevents "Entity does not
have permissions" errors for users whose role lacks AI access
- Fix initialization bug where `isDefined(currentAIChatThread)` always
returned true for the `'unknown-thread'` sentinel value, preventing thread
initialization from ever running — replaced with an `isValidUuid` check
## Summary
- Introduces a new `application-logs` core module with a driver pattern
(disabled/console/clickhouse) to capture and persist logic function
execution logs
- Adds a ClickHouse `applicationLog` table with per-line log storage,
30-day TTL, and `ORDER BY (workspaceId, timestamp, applicationId,
logicFunctionId)`
- Surfaces application logs in the existing frontend audit logs table as
a new "Application Logs" source with dedicated columns (Function,
Timestamp, Level, Message, Execution ID)
## Details
**Write path**: `LogicFunctionExecutorService.handleExecutionResult()`
parses the multi-line log string from driver output into individual `{
timestamp, level, message }` entries, generates an execution UUID, and
passes them to `ApplicationLogsService.writeLogs()` which delegates to
the configured driver.
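The write-path parsing can be sketched like this — a hedged illustration; the real line format of the driver output is an assumption (tab-separated here):

```typescript
type ApplicationLogEntry = { timestamp: string; level: string; message: string };

// Hypothetical sketch of the per-line parsing in handleExecutionResult.
const parseExecutionLogs = (rawLogs: string): ApplicationLogEntry[] =>
  rawLogs
    .split('\n')
    .filter((line) => line.trim() !== '') // drop blank lines
    .map((line) => {
      const [timestamp, level, ...rest] = line.split('\t');
      return { timestamp, level, message: rest.join('\t') };
    });
```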
**Driver pattern**: Follows the exception-handler module style (Symbol
injection token + `forRootAsync` dynamic module). Three drivers:
- `DISABLED` (default) — no-op, prevents information leaking
- `CONSOLE` — structured stdout logging with level-based `console.*`
calls
- `CLICKHOUSE` — inserts rows into the `applicationLog` ClickHouse table
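The driver selection can be sketched like this, with the NestJS Symbol token / `forRootAsync` machinery stripped out for brevity. Driver names match the list above; the interface shape is an assumption:

```typescript
interface ApplicationLogsDriver {
  writeLogs(entries: { level: string; message: string }[]): void;
}

const disabledDriver: ApplicationLogsDriver = {
  // No-op: nothing is persisted or printed.
  writeLogs: () => {},
};

const consoleDriver: ApplicationLogsDriver = {
  // Level-based console.* dispatch, as described above.
  writeLogs: (entries) =>
    entries.forEach(({ level, message }) =>
      (console as Record<string, any>)[level]?.(message),
    ),
};

// DISABLED is the default when the env var is unset or unrecognized.
const buildDriver = (type: string): ApplicationLogsDriver =>
  type === 'CONSOLE' ? consoleDriver : disabledDriver;
```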
**Read path**: Extends the existing event-logs module by adding
`APPLICATION_LOG` to the `EventLogTable` enum, table name mapping, and
normalization logic.
**Config**: New `APPLICATION_LOG_DRIVER_TYPE` environment variable
(default: `DISABLED`).
As per title. Replaced .json file with .lottie file - it is 10x smaller.
Illustrations folder went from 150mb to 5.9mb with compressed variants
of 3D models. The DracoLoader, GLTFLoader and some other logic is
repetitive across files, but Thomas is working on the same files at the
moment, so not touching too much to avoid conflicts with his PR. Once
he's done styling, we can extract shared logic into helpers.
Migrating some 1.21 migrations to the new pattern so they are written to
the history.
When releasing in prod we will have to inject them manually, as they
have already been run.
It's mainly for self-hosting: when releasing 1.22 we will be able to
rely on the 1.21 history.
<details>
<summary>
Reorganizing standard page layouts (see screenshot below)
</summary>
<img width="355" height="617" alt="Screenshot 2026-04-09 at 10 44 02"
src="https://github.com/user-attachments/assets/0b71d607-1d9d-48b6-8612-3de98d12d1fa"
/>
</details>
<details>
<summary>
Note: All standard objects now have a last system section for
createdBy/createdAt. However, since all new custom fields are added to
the last section, they land there (see 2nd screenshot below) until
people move them to a dedicated section, which can be counterintuitive.
An upcoming feature that lets users set a default section and visibility
for newly added fields should solve that
</summary>
<img width="360" height="849" alt="Screenshot 2026-04-09 at 10 43 44"
src="https://github.com/user-attachments/assets/127990d0-66b7-4b1d-ab00-8251e9707a04"
/>
</details>
Another note: Sections are not translated at the moment
## Bug description
Headless command menu items were interpreted as non-headless command
menu items and opened in the side panel.
https://github.com/user-attachments/assets/44d8b4d3-9ee5-4f12-9ff8-b3ed51e5b3b6
## Fix
https://github.com/user-attachments/assets/9d6eb900-9817-4ceb-87aa-547ad046a5a1
- Command menu items received via SSE lack the nested `frontComponent`
relation (only `frontComponentId` is present), causing
`item.frontComponent?.isHeadless` to always be undefined.
- Add `frontComponents` to the metadata store's stale entity loading
- Rewrite `commandMenuItemsSelector` to join `commandMenuItems` with
`frontComponents` by `frontComponentId`
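The join in the rewritten selector can be sketched as below. Entity shapes and the helper name are illustrative, not the actual selector code; the point is resolving `isHeadless` from `frontComponentId` rather than from a nested relation that SSE payloads don't carry:

```typescript
type CommandMenuItem = { id: string; frontComponentId: string | null };
type FrontComponent = { id: string; isHeadless: boolean };

// Re-attach the frontComponent relation by id so that
// item.frontComponent?.isHeadless is defined after SSE updates.
const joinCommandMenuItems = (
  items: CommandMenuItem[],
  frontComponents: FrontComponent[],
) => {
  const byId = new Map(frontComponents.map((fc) => [fc.id, fc]));
  return items.map((item) => ({
    ...item,
    frontComponent: item.frontComponentId
      ? (byId.get(item.frontComponentId) ?? null)
      : null,
  }));
};
```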
## Summary
Added a Redis cache flush step in the breaking changes CI workflow to
prevent stale data contamination between consecutive server runs.
## Key Changes
- Added a new workflow step that executes `redis-cli FLUSHALL` between
the current branch server run and the main branch server run
- Includes error handling to log a warning if the Redis flush fails,
without blocking the workflow
- Added explanatory comments documenting why this step is necessary
## Implementation Details
The Redis flush is necessary because both the current branch and main
branch servers share the same Redis instance during CI testing. The
`CoreEntityCacheService` and `WorkspaceCacheService` persist cached
entities across process restarts, which can cause stale data from the
current branch server to contaminate the main branch server comparison.
This step ensures a clean cache state before switching branches.
https://claude.ai/code/session_01BVMacfAXDMNx5WAFtP7GgW
Co-authored-by: Claude <noreply@anthropic.com>
- disable publish with existing version
- disable installation of app version already installed
---------
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
When an object is created or enabled, we create a navigation command
menu item for that object. When the object is disabled or deleted, we
remove this command menu item.
## Summary
This PR refactors the EventRow component structure by extracting shared
types and styled components into a dedicated base file, improving code
organization and reducing duplication.
## Key Changes
- Created new `EventRowBase.tsx` file to centralize shared EventRow
utilities
- Moved `EventRowDynamicComponentProps` interface from
`EventRowDynamicComponent.tsx` to `EventRowBase.tsx`
- Moved styled components `StyledEventRowItemColumn` and
`StyledEventRowItemAction` from `EventRowDynamicComponent.tsx` to
`EventRowBase.tsx`
- Updated all imports across 6 files to reference the new `EventRowBase`
module instead of `EventRowDynamicComponent`
- Simplified `EventRowDynamicComponent.tsx` to re-export types and
styles from `EventRowBase` for backward compatibility
## Files Updated
- `EventRowDynamicComponent.tsx` - Simplified to import and re-export
from EventRowBase
- `EventRowActivity.tsx` - Updated import path
- `EventRowCalendarEvent.tsx` - Updated import path
- `EventRowMainObject.tsx` - Updated import path
- `EventRowMainObjectUpdated.tsx` - Updated import path
- `EventRowMessage.tsx` - Updated import path
## Benefits
- Better separation of concerns with shared base utilities in a
dedicated module
- Reduced circular dependency risks
- Clearer module structure for future EventRow-related components
- Maintains backward compatibility through re-exports
https://claude.ai/code/session_011EyhBJ56RGuZHxBVWwzEQy
---------
Co-authored-by: Claude <noreply@anthropic.com>
Bumps [tsdav](https://github.com/natelindev/tsdav) from 2.1.5 to 2.1.8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/natelindev/tsdav/releases">tsdav's
releases</a>.</em></p>
<blockquote>
<h2>v2.1.8</h2>
<h5>improvements</h5>
<ul>
<li>fixed <a
href="https://redirect.github.com/natelindev/tsdav/issues/272">#272</a>
malformed expand request in fetchCalendarObjects</li>
<li>optimized fetchCalendarObjects to reduce redundant requests when
expand is true</li>
<li>added Bearer auth support for token-based providers (e.g., Nextcloud
OIDC)</li>
<li>added fetch overrides across account, request, and collection
helpers for custom transports</li>
<li>prefer native fetch in Cloudflare Workers to avoid cross-fetch
incompatibilities</li>
<li>collectionQuery now rejects on non-OK responses instead of returning
empty arrays</li>
<li>fetchVCards filters out collection URLs to avoid empty/invalid
entries (Radicale-compatible)</li>
<li>docs updates for iCal feed import, providers, and custom transport
guidance</li>
</ul>
<h2>v2.1.7</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix: Handle empty body responses in davRequest to prevent crash by
<a href="https://github.com/hsvtslv"><code>@hsvtslv</code></a> in <a
href="https://redirect.github.com/natelindev/tsdav/pull/267">natelindev/tsdav#267</a></li>
<li>Fix npm authentication in auto-release workflow by <a
href="https://github.com/Copilot"><code>@Copilot</code></a> in <a
href="https://redirect.github.com/natelindev/tsdav/pull/271">natelindev/tsdav#271</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/hsvtslv"><code>@hsvtslv</code></a> made
their first contribution in <a
href="https://redirect.github.com/natelindev/tsdav/pull/267">natelindev/tsdav#267</a></li>
<li><a href="https://github.com/Copilot"><code>@Copilot</code></a> made
their first contribution in <a
href="https://redirect.github.com/natelindev/tsdav/pull/271">natelindev/tsdav#271</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/natelindev/tsdav/compare/v2.1.6...v2.1.7">https://github.com/natelindev/tsdav/compare/v2.1.6...v2.1.7</a></p>
<h2>v2.1.6</h2>
<h5>improvements</h5>
<ul>
<li>added AGENTS.md</li>
<li>updated dependencies</li>
<li>fixed docs build issues</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/natelindev/tsdav/blob/main/CHANGELOG.md">tsdav's
changelog</a>.</em></p>
<blockquote>
<h2>v2.1.8</h2>
<h5>improvements</h5>
<ul>
<li>fixed <a
href="https://redirect.github.com/natelindev/tsdav/issues/272">#272</a>
malformed expand request in fetchCalendarObjects</li>
<li>optimized fetchCalendarObjects to reduce redundant requests when
expand is true</li>
<li>added Bearer auth support for token-based providers (e.g., Nextcloud
OIDC)</li>
<li>added fetch overrides across account, request, and collection
helpers for custom transports</li>
<li>prefer native fetch in Cloudflare Workers to avoid cross-fetch
incompatibilities</li>
<li>collectionQuery now rejects on non-OK responses instead of returning
empty arrays</li>
<li>fetchVCards filters out collection URLs to avoid empty/invalid
entries (Radicale-compatible)</li>
<li>docs updates for iCal feed import, providers, and custom transport
guidance</li>
</ul>
<h2>v2.1.7</h2>
<h5>improvements</h5>
<ul>
<li>docs: add browser usage example and clarify class-based login</li>
<li>docs: add Nextcloud connection guidance and Apple password
reference</li>
</ul>
<h2>v2.1.6</h2>
<h5>improvements</h5>
<ul>
<li>added AGENTS.md</li>
<li>updated dependencies</li>
<li>fixed docs build issues</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1800c16c8e"><code>1800c16</code></a>
Merge pull request <a
href="https://redirect.github.com/natelindev/tsdav/issues/273">#273</a>
from natelindev/fix/fix-all-issues</li>
<li><a
href="ad7782b7a0"><code>ad7782b</code></a>
fix ci</li>
<li><a
href="c0275001ba"><code>c027500</code></a>
fix ci</li>
<li><a
href="2f818c06f8"><code>2f818c0</code></a>
chore: prepare v2.1.8 release</li>
<li><a
href="bfab3ac8eb"><code>bfab3ac</code></a>
fix: prefer global fetch for Cloudflare Workers compatibility (Issue
215)</li>
<li><a
href="1942d42d9b"><code>1942d42</code></a>
fix: support custom fetch override in DAVClient for Electron/CORS (<a
href="https://redirect.github.com/natelindev/tsdav/issues/216">#216</a>)</li>
<li><a
href="3d52838fcd"><code>3d52838</code></a>
docs: add guide for importing iCal feeds (Airbnb, Booking)</li>
<li><a
href="318b0d3486"><code>318b0d3</code></a>
fix: ensure collectionQuery throws on gateway errors and status >=
400</li>
<li><a
href="aa9445a3aa"><code>aa9445a</code></a>
feat: support custom fetch override for KaiOS mozSystem support (Issue
220)</li>
<li><a
href="ce48770780"><code>ce48770</code></a>
fix: address issues with Radicale and CardDAV vCard fetching (Issue
239)</li>
<li>Additional commits viewable in <a
href="https://github.com/natelindev/tsdav/compare/v2.1.5...v2.1.8">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~GitHub">GitHub Actions</a>, a new releaser
for tsdav since your current version.</p>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- `buildBedrockProvider` ignored `authType: "role"` and only honored
static `accessKeyId`/`secretAccessKey` pairs, so deployments relying on
IRSA (e.g. EKS pods with `AWS_WEB_IDENTITY_TOKEN_FILE` + `AWS_ROLE_ARN`)
had no way to authenticate — `@ai-sdk/amazon-bedrock` does not walk the
AWS default credential chain on its own.
- When `authType === 'role'`, wire `fromNodeProviderChain()` from
`@aws-sdk/credential-providers` (already a server dependency, used by
the S3 and logic-function drivers) into `createAmazonBedrock` so IRSA
and other ambient AWS credentials resolve correctly.
- The static-credentials path is unchanged for `authType !== 'role'`.
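The branch itself reduces to a small decision. This is only an illustrative reconstruction of the selection logic; the real code passes `fromNodeProviderChain()` from `@aws-sdk/credential-providers` into `createAmazonBedrock`, which is omitted here so the branching is visible in isolation:

```typescript
type BedrockConfig = {
  authType?: 'role' | 'static';
  accessKeyId?: string;
  secretAccessKey?: string;
};

// 'role' defers to the ambient AWS credential chain (IRSA, instance
// profile, env vars); anything else keeps the static key-pair path.
const resolveCredentialsSource = (
  config: BedrockConfig,
): 'chain' | 'static' => (config.authType === 'role' ? 'chain' : 'static');
```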
## Test plan
- [ ] Deploy a server pod with IRSA + an `ai-models-config` entry using
`"authType": "role"` and verify Bedrock calls succeed (no `Could not
load credentials from any providers` error).
- [ ] Verify static-credentials providers (`accessKeyId` +
`secretAccessKey`) still work as before.
- [ ] `npx nx lint:diff-with-main twenty-server` passes.
- [ ] `npx nx typecheck twenty-server` passes.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
Fixes #19422.
Self-hosted users hitting "No AI models are available" after configuring
API keys via the admin panel were victims of a stale
`AiModelRegistryService` cache. Only the four `addAiProvider` /
`removeAiProvider` / `addModelToProvider` / `removeModelFromProvider`
mutations called `refreshRegistry()` — setting an API key through
`set/update/deleteDatabaseConfigVariable` left the registry pointing at
the pre-mutation provider state.
Rather than patch each mutation site (and re-introduce the same class of
bug on the next one), the registry now invalidates lazily based on the
LLM config-group hash, mirroring the pattern `WebSearchDriverFactory`
already uses via `DriverFactoryBase`. Any mutation to an LLM-tagged
config variable is picked up automatically on the next read — callers
never have to remember to refresh.
- Extracted `getConfigGroupHash` into a shared util reused by both
`DriverFactoryBase` and `AiModelRegistryService` (and switched the hash
to `JSON.stringify` so object-typed config vars like `AI_PROVIDERS`
actually contribute meaningfully).
- `AiModelRegistryService` gates all internal `Map` access behind
private getters that call `ensureFresh()`, so future read paths can't
accidentally observe stale state. Build path uses underscored backing
fields directly to avoid recursing.
- Dropped the now-redundant `refreshRegistry()` public method and its
four call sites in `admin-panel.resolver.ts`.
## Test plan
- [x] `nx typecheck twenty-server` passes
- [x] Existing admin-panel + ai-models specs pass (79 tests)
- [ ] Manual: on a self-hosted instance, set `OPENAI_API_KEY` (or
another provider key) via the admin panel `setDatabaseConfigVariable`
mutation and confirm AI features work without a server restart
- [ ] Manual: existing flows (`addAiProvider`, `addModelToProvider`,
etc.) still pick up new providers on the next read
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Context
- Add "Reset to default" for page layout tabs and widgets — backend
mutation resets overrides, reactivates deactivated children, and deletes
custom children
- Fix override write bug where mutating an overridable property (e.g.
widget title) on a standard-app entity incorrectly overwrote the base
column instead of writing to the overrides JSONB —
PageLayoutUpdateService now uses resolveFlatEntityOverridableProperties
for accurate diffing and routes properties through
sanitizeOverridableEntityInput
- Deprecate isOverridden - We need more time to think about this
feature. Currently this adds too much complexity for a very small
benefit
https://github.com/user-attachments/assets/a84546c8-1e15-4d9e-a489-0825cf8b8ed2
## Summary
- When `WEB_SEARCH_DRIVER` is set to a non-native driver (e.g. `EXA`),
the `web_search` action tool was only listed in the tool catalog and had
to be discovered via `learn_tools` before use. Worse,
`system-prompt-builder` explicitly instructed the model **not** to call
`web_search` whenever it wasn't in the preloaded set, so even setting
`EXA_API_KEY` correctly resulted in the model never calling Exa.
- Fix: preload `web_search` from the registry alongside the other action
tools when `shouldUseNativeSearch()` is false, mirroring how native
provider search is already injected into `directTools`. The system
prompt builder picks this up automatically and advertises `web_search`
as a ready-to-use tool.
- Unrelated: pass `isSystemBuild: true` to the messageThread upgrade
command's migration build.
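The preload decision described above can be sketched like this (tool and helper names are assumptions taken from the description, not the actual registry code):

```typescript
type Tool = { name: string };

// When native provider search is available, web_search stays out of the
// preloaded set; otherwise it joins the action tools so the system
// prompt advertises it as ready to use.
const buildPreloadedTools = (
  actionTools: Tool[],
  webSearchTool: Tool,
  shouldUseNativeSearch: boolean,
): Tool[] =>
  shouldUseNativeSearch ? actionTools : [...actionTools, webSearchTool];
```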
## Test plan
- [ ] With `WEB_SEARCH_DRIVER=EXA` and `EXA_API_KEY` set, ask the chat
agent for current/news information and verify it calls `web_search`
directly (not via `learn_tools` / `execute_tool`) and that Exa is hit.
- [ ] With `WEB_SEARCH_PREFER_NATIVE=true`, verify native provider
search is still used and the action tool is not preloaded.
- [ ] With `WEB_SEARCH_DRIVER=DISABLED`, verify behavior is unchanged
(`shouldUseNativeSearch()` returns true → no Exa preload).
- [ ] Run the 1-21 fix-message-thread-view-and-label-identifier upgrade
command on a workspace and confirm the migration builds/runs as a system
build.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Introduction
Persist in the db the registry-computed name
(version_classname_timestamp) so it stays consistent with the instance
command pattern too
Fix record table not refreshing after merging records
The record table's virtualization state persists across navigations, so
when the table remounts after a merge redirect, none of the existing
reload conditions were satisfied and the table rendered stale data.
Add an isInitializedOnMount local state to
RecordTableVirtualizedInitialDataLoadEffect that guarantees a fresh
triggerInitialRecordTableDataLoad on every mount, acting as a fallback
when no other reload condition matches.
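Reduced to a pure decision function (a hypothetical shape, not the actual effect code), the fallback reads as: a fresh mount always triggers a load, even when every other reload condition is false, because virtualization state survives navigation:

```typescript
// Returns true when the table should fire
// triggerInitialRecordTableDataLoad on this render.
const shouldTriggerInitialLoad = (args: {
  isInitializedOnMount: boolean;
  otherReloadConditionMatched: boolean;
}): boolean =>
  !args.isInitializedOnMount || args.otherReloadConditionMatched;
```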
## Summary
- Adds `deploy-twenty-app` and `install-twenty-app` composite actions to
`.github/actions/` so app repos can reference them remotely — same
pattern as `spawn-twenty-app-dev-test` for CI
- Updates `cd.yml` in template, hello-world, and postcard to use
`twentyhq/twenty/.github/actions/deploy-twenty-app@main` /
`install-twenty-app@main` instead of local `./.github/actions/` copies
- Removes the 6 local action files that were duplicated across template
and example apps
**Before** (each app repo carried its own action copies):
```yaml
uses: ./.github/actions/deploy
```
**After** (centralized, like CI):
```yaml
uses: twentyhq/twenty/.github/actions/deploy-twenty-app@main
```
Made with [Cursor](https://cursor.com)
## Summary
- Replaces the `spawn-twenty-app-dev-test` Docker action with native
GitHub Actions services (`postgres:18` + `redis`) and direct server
startup (`npx nx start:ci twenty-server`)
- Aligns with the pattern already used by `ci-sdk.yaml` for e2e tests
- Removes dependency on the `twenty-app-dev` Docker image for CI
Updated workflows:
- `ci-example-app-postcard`
- `ci-example-app-hello-world`
- `ci-create-app-e2e-minimal`
- `ci-create-app-e2e-hello-world`
- `ci-create-app-e2e-postcard`
Server setup pattern:
1. Postgres 18 + Redis as job services
2. `CREATE DATABASE "test"` via psql
3. `npx nx run twenty-server:database:reset` (migrations + seed)
4. `nohup npx nx start:ci twenty-server &`
5. `npx wait-on http://localhost:3000/healthz`
Made with [Cursor](https://cursor.com)
## Summary
- replace the home hero visual with an interactive multi-view experience
for table, kanban, workflow, and dashboard states
- add the supporting hero data model, page normalizers, loaders, chips,
and sales dashboard assets
- update related website illustration components and ignore local Claude
worktrees in `.gitignore`
## Testing
- Not run (not requested)
This PR moves illustrations out of sections into a dedicated folder and
selects which illustration to load on a specific page using a registry.
The reason for this change is that we want to be able to independently
style each illustration since shared styles being applied to nine
illustrations of ThreeCards, for example, does not get us the eventual
output visually that we're after because the underlying assets have
different lighting, angles etc.
Once we know what can be reused, I will refactor. For now, Thomas wants
to take over illustration styling to implement what he has in mind, so
these files are isolated for him to work on without breaking anything.
Replace generic JSON scalar with a typed GraphQL union
CommandMenuItemPayload (PathNavigationPayload |
ObjectMetadataNavigationPayload) for the CommandMenuItem.payload field
## Summary
- Adds reusable composite GitHub Actions for Twenty app deployment:
- `.github/actions/deploy` — builds and deploys to a remote instance
(`api-url`, `api-key` inputs)
- `.github/actions/install` — installs/upgrades on a specific workspace
(`api-url`, `api-key` inputs)
- Adds a `cd.yml` CD workflow that calls both actions in sequence. The
workflow:
- Deploys on push to `main`
- Can be triggered from a PR by adding a `deploy` label
- Configures a named remote via `TWENTY_DEPLOY_URL` env var and
`TWENTY_DEPLOY_API_KEY` secret
- Applied to: `create-twenty-app` template, `postcard` example,
`hello-world` example
- Updates the `spawn-twenty-app-dev-test` action ref from
`@feature/sdk-config-file-source-of-truth` to `@main` in all `ci.yml`
files
# Introduction
This PR introduces the slow instance commands pattern, which allows
migrating data ahead of a schema migration that would otherwise fail.
Slow instance commands run after the fast instance commands and before
the workspace commands.
On Twenty instances that have no active or suspended workspace, the data
migration part is skipped but the migration still runs, notably for
fresh installs.
We were previously hacking through typeorm transaction system to gain
such granularity using save points:
```ts
export class AddPayloadToCommandMenuItem1775129635528
implements MigrationInterface
{
name = 'AddPayloadToCommandMenuItem1775129635528';
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`ALTER TABLE "core"."commandMenuItem" ADD "payload" jsonb`,
);
const savepointName =
'sp_add_payload_check_constraint_to_command_menu_item';
try {
await queryRunner.query(`SAVEPOINT ${savepointName}`);
await addPayloadCheckConstraintToCommandMenuItem(queryRunner);
await queryRunner.query(`RELEASE SAVEPOINT ${savepointName}`);
} catch (e) {
try {
await queryRunner.query(`ROLLBACK TO SAVEPOINT ${savepointName}`);
await queryRunner.query(`RELEASE SAVEPOINT ${savepointName}`);
} catch (rollbackError) {
// oxlint-disable-next-line no-console
console.error(
'Failed to rollback to savepoint in AddPayloadToCommandMenuItem1775129635528',
rollbackError,
);
throw rollbackError;
}
// oxlint-disable-next-line no-console
console.error(
'Swallowing AddPayloadToCommandMenuItem1775129635528 error',
e,
);
}
}
}
```
The swallowed part was afterwards re-applied within a workspace command;
it was hacky, and misleading for self-hosters because of false positives
in the logs.
## New pattern
Generate the slow instance command
```
npx nx database:migrate:generate twenty-server -- --name add-foo-bar-columns --type slow
```
```ts
import { DataSource, QueryRunner } from 'typeorm';
import { RegisteredInstanceCommand } from 'src/engine/core-modules/upgrade/decorators/registered-instance-command.decorator';
import { SlowInstanceCommand } from 'src/engine/core-modules/upgrade/interfaces/slow-instance-command.interface';
@RegisteredInstanceCommand('1.21.0', 1775640902366, { type: 'slow' })
export class AddPrastoinColToWorkspaceSlowInstanceCommand implements SlowInstanceCommand {
async runDataMigration(dataSource: DataSource): Promise<void> {
// TODO: implement data backfill before the DDL migration
}
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query('ALTER TABLE "core"."workspace" ADD "prastoin" character varying');
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query('ALTER TABLE "core"."workspace" DROP COLUMN "prastoin"');
}
}
```
### Why `run-instance-commands` and `upgrade` remain separate commands
These two commands serve fundamentally different purposes with
incompatible scoping semantics:
`run-instance-commands` iterates over all the legacy TypeORM migrations
and the instance commands of all versions, and is used for database init
among other things. We could centralize both, but the readability
tradeoff isn't worth it.
In the future, thanks to the cross-version pattern, we will be able to
centralize them, but not right now.
Please note that by default `run-instance-commands` only runs the fast
instance commands, which is expected for our cloud prod CD.
## Context
SSE metadata events for overridable entities (viewField, viewFieldGroup,
pageLayoutWidget, pageLayoutTab) were sending raw flat entity data
without resolving overrides into base properties or filtering isActive:
false entities. This caused the frontend to display stale/incorrect
values (e.g., a hidden viewField still appearing visible).
## Implementation
Add a sanitization step in
MetadataEventPublisher.enrichMetadataEventBatch that resolves overrides
and converts isActive transitions into the appropriate event types
(deactivated -> delete, reactivated -> create), matching what GraphQL
resolvers already do
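The isActive-transition mapping can be sketched as a pure function (event type names are illustrative): a deactivation surfaces to the frontend as a delete, a reactivation as a create, and everything else keeps its original event type:

```typescript
type MetadataEvent = 'create' | 'update' | 'delete';

// Convert an isActive transition into the event type the frontend
// should see, matching what GraphQL resolvers already do.
const resolveEventType = (
  original: MetadataEvent,
  wasActive: boolean,
  isActive: boolean,
): MetadataEvent => {
  if (wasActive && !isActive) return 'delete';
  if (!wasActive && isActive) return 'create';
  return original;
};
```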
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
This PR adds support for the **CLF (Unidad de Fomento)** currency code
across the application.
## Changes
- Added `CLF` to the supported currency list
- Updated validation logic to recognize CLF as a valid currency
- Adjusted formatting and handling where applicable
## Motivation
CLF is a widely used unit of account in Chile, commonly used for
financial operations such as real estate, contracts, and indexed
payments.
Supporting CLF improves localization and enables better adoption of
Twenty CRM in the Chilean market.
## Files Modified
- Updated 3 files to integrate CLF support (currency configuration,
validation, and related logic)
## Testing
- Verified that CLF can be selected and processed correctly
- Confirmed no regression in existing currency behavior
## Summary
- **Config as source of truth**: `~/.twenty/config.json` is now the
single source of truth for SDK authentication — env var fallbacks have
been removed from the config resolution chain.
- **Test instance support**: `twenty server start --test` spins up a
dedicated Docker instance on port 2021 with its own config
(`config.test.json`), so integration tests don't interfere with the dev
environment.
- **API key auth for marketplace**: Removed `UserAuthGuard` from
`MarketplaceResolver` so API key tokens (workspace-scoped) can call
`installMarketplaceApp`.
- **CI for example apps**: Added monorepo CI workflows for `hello-world`
and `postcard` example apps to catch regressions.
- **Simplified CI**: All `ci-create-app-e2e` and example app workflows
now use a shared `spawn-twenty-app-dev-test` action (Docker-based)
instead of building the server from source. Consolidated auth env vars
to `TWENTY_API_URL` + `TWENTY_API_KEY`.
- **Template publishing fix**: `create-twenty-app` template now
correctly preserves `.github/` and `.gitignore` through npm publish
(stored without leading dot, renamed after copy).
## Test plan
- [x] CI SDK (lint, typecheck, unit, integration, e2e) — all green
- [x] CI Example App Hello World — green
- [x] CI Example App Postcard — green
- [x] CI Create App E2E minimal — green
- [x] CI Front, CI Server, CI Shared — green
## Summary
Adds `upgrade:1-21:fix-message-thread-view-and-label-identifier` to
retroactively apply two messageThread changes from #19351 that never
propagated to existing workspaces (because
`synchronizeTwentyStandardApplicationOrThrow` only runs at workspace
creation):
1. **`allMessageThreads` view fields**: delete-and-recreate all view
fields for the standard view from the current twenty-standard
definition, which adds the new `subject` and `updatedAt` columns. Same
pattern as the FIELDS_WIDGET sync in
`1-21-workspace-command-1775500005000-backfill-page-layouts-and-fields-widget-view-fields`.
2. **`messageThread.labelIdentifierFieldMetadataId`**: repointed to the
`subject` field via `validateBuildAndRunWorkspaceMigration`. The
migration runner already resolves
`labelIdentifierFieldMetadataUniversalIdentifier` → id in
`update-object-action-handler`, so this goes through the standard flow
and properly invalidates caches.
Both fixes share a single `validateBuildAndRunWorkspaceMigration` call
(one `viewField` op, one `objectMetadata` update op). Idempotent — skips
if nothing to do.
Depends on `upgrade:1-21:backfill-message-thread-subject` having run
first (the label identifier fix needs the `subject` field to exist; it
logs a warning and skips that part otherwise).
## Test plan
- [ ] Legacy workspace: run `backfill-message-thread-subject`, then
this command, then verify:
- the `allMessageThreads` view shows the new `subject` and
`updatedAt` columns
- messageThread records display their `subject` as the record label
- [ ] Re-run the command: no-op, logs `Nothing to fix`
- [ ] `--dry-run` logs planned changes without applying them
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Introduction
Migrating the workspace commands to the decorator + timestamp listing
pattern, as for the instance commands
We've now been able to remove the upgrade command abstraction where we
needed to import all modules and order them
Now they're dynamically retrieved at upgrade runtime, sorted by
timestamp
## Instance and workspace commands name
The name is computed from the command metadata `version`, `className`
and `timestamp`; duplicates are validated at module init by the unified
registry
# Size issue
```
Yarn install Lambda failed: {"errorType":"Error","errorMessage":"yarn install failed: ➤ Y/tmp/3bac8baafe6354db414389434726b02c/nodejs/node_modules/tar ENOSPC: no space left on device, write","➤ YN0000: └ Completed in 3s 57ms","➤ YN0000: · Failed with errors in 11s 684ms",""," at runYarnInstall (file:///var/task/index.mjs:48:11)"," at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"," at async Runtime.handler (file:///var/task/index.mjs:119:5)"]}
```
# target esnext
Setting the same target for each logic function build funnel:
- sdk
- local driver
- lambda driver
## Summary
- Reduce the plan title size from `md` to `xs`
- Switch the pricing card header layout from grid to flex for tighter
title/price control
- Tighten the title line-height and add a `black/60` color override for
the `/month...` suffix
- Add a 4px gap between the price amount and suffix
- Reduce the illustration height to 80px and shift it slightly right on
desktop
## Testing
- Not run (not requested)
Co-authored-by: Charles Bochet <charles@twenty.com>
- Adds a payload JSON column to `CommandMenuItem` and introduces a
unified `NAVIGATION` engine component key that replaces all individual
GO_TO_* keys
- Navigation commands now use the payload to determine their target
(either an objectMetadataItemId or a path), making navigation commands
dynamic and eliminating the need for a hardcoded engine key per object
- Includes a 1.21 upgrade command (refactor-navigation-commands) that
migrates existing GO_TO_* items to NAVIGATION items with the appropriate
payload, and applies a CHECK constraint enforcing payload coherence
https://github.com/user-attachments/assets/4d305ba2-ae0b-4556-bb0e-e9d899777350
TODO: In a second PR, create the sync between object metadata items and
the navigation command menu items
- Object metadata item created or enabled -> Create navigation command
- Object metadata item deleted or disabled -> Delete associated
navigation command
In another PR:
- Allow `label`, `shortLabel` and `icon` to resolve the
`navigateToObjectMetadataItem` dynamically in their interpolation
instead of being hardcoded in the command menu item
- Make the icon dynamic in the command menu items, like the label, so
that we can resolve `${navigateToObjectMetadataItem.icon}` at runtime.
This way we won't need to keep updating the command menu item icon when
we update the objectMetadataItem icon
**Optimize workflow cron jobs: partition workspaces and use raw
queries**
- Split all 3 workflow cron jobs (WorkflowRunEnqueueCronJob,
WorkflowHandleStaledRunsCronJob, WorkflowCleanWorkflowRunsCronJob) to
process only 1/10th of workspaces per invocation using minute-based
partitioning, reducing per-run load
- Replace ORM repository + workspace context loading with raw SQL
queries in WorkflowRunEnqueueCronJob and
WorkflowHandleStaledRunsCronJob, avoiding costly cache/metadata
hydration for a simple existence check
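The minute-based partitioning above can be sketched like this (the hash
function and names are illustrative assumptions, not the actual cron job
code):

```typescript
// Sketch: each cron invocation only processes workspaces whose id falls
// into the bucket matching the current minute modulo the partition count.
const PARTITION_COUNT = 10;

const hashWorkspaceId = (workspaceId: string): number =>
  [...workspaceId].reduce((acc, char) => acc + char.charCodeAt(0), 0);

const shouldProcessWorkspace = (
  workspaceId: string,
  currentMinute: number,
): boolean =>
  hashWorkspaceId(workspaceId) % PARTITION_COUNT ===
  currentMinute % PARTITION_COUNT;
```

Each workspace is then visited once every ten minutes instead of every
minute, spreading the load across invocations.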
## Context
Due to the chosen strategy for the "Reset to default" feature for page
layouts, those overridable entities need to be associated with the
Standard app to work properly (there is no "Default" state for custom
entities). Until we find a better implementation, I'm changing the
backfill command to reflect that.
## Summary
- The 1-21 `upgrade:1-21:backfill-message-thread-subject` command
assumed the legacy `sync-metadata` flow would create the new
`messageThread.subject` field metadata and column on existing
workspaces. That sync was removed, so the column was never added and the
backfill silently skipped.
- The command now ensures the field exists by computing the standard
`messageThread.subject` flat field from the twenty-standard
application and running it through
`WorkspaceMigrationValidateBuildAndRunService` (the same pattern used by
the page-layout / command-menu-item backfills). This creates both the
field metadata row and the workspace schema column.
- After ensuring the field, the existing `UPDATE messageThread SET
subject = ...` runs as before.
## Test plan
- [ ] On a workspace with no `subject` column on `messageThread`,
run `yarn command:prod upgrade:1-21:backfill-message-thread-subject`
and confirm:
- the field metadata row is created in `core."fieldMetadata"`
- the `subject` column is created on `workspace_<id>.messageThread`
- existing message threads are backfilled from the most recent message
- [ ] Re-run on the same workspace and confirm it is a no-op (field
already exists, no rows to update)
- [ ] Run on a workspace that already has the column but `NULL`
subjects and confirm only the backfill runs
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Introduction
As for the instance commands, we want to keep track of what has been
run for the workspace commands.
Note that the history is updated only when the workspace command
is run through the upgrade directly, not when it is run atomically.
## What's next
Later we will use this history to determine the current workspace
version and instance version, getting rid of the version stored in the
database; that will be the final step.
## Summary
Fixes two bugs in
`MessagingMessageService.saveMessagesWithinTransaction`.
### Bug 1 — `ON CONFLICT DO UPDATE command cannot affect row a second
time`
Reported in production logs from `MessagingMessagesImportService`.
Postgres rejects a single `INSERT … ON CONFLICT DO UPDATE` when the same
conflict target appears twice in the values list, and that's exactly
what was happening to `messageThread`.
Where it came from: #19351 added the message-thread subject refresh
feature and, in doing so, switched the existing thread `insert` to a
bulk `upsert(['id'])` over a list built by concatenating two sources:
```ts
const threadsToUpsert = [
  ...messageThreadsToCreate, // brand-new thread rows
  // subject refreshes for existing threads
  ...Array.from(threadSubjectUpdates.entries()).map(([id, { subject }]) => ({
    id,
    subject,
  })),
];
await messageThreadRepository.upsert(threadsToUpsert, ['id'], txManager);
```
Each list is internally unique, but they are **not disjoint**.
`enrichMessageAccumulatorWithMessageThreadToCreate`, when it sees two
messages in the same batch sharing a brand-new thread external id,
copies the first sibling's freshly-minted thread id into the second
sibling's `existingThreadInDB`. The subject-update gate later in the
loop then trusts that field and queues a subject refresh for that id —
which is also already in `messageThreadsToCreate`. Same id, same
statement, two rows → Postgres aborts the transaction and the import
retries forever on the same batch.
**Fix:** stop merging the two lists. Issue creates and subject updates
as two separate statements within the same transaction:
```ts
if (messageThreadsToCreate.length > 0) {
  await messageThreadRepository.insert(messageThreadsToCreate, txManager);
}

if (threadSubjectUpdates.size > 0) {
  await messageThreadRepository.upsert(
    Array.from(threadSubjectUpdates.entries()).map(([id, { subject }]) => ({
      id,
      subject,
    })),
    ['id'],
    txManager,
  );
}
```
This is closer to the pre-#19351 shape (`insert` for new rows) and
side-steps the duplicate-row constraint entirely: each statement is
internally unique (creates use freshly minted UUIDs; updates are keyed
by a `Map<id, …>`), and within the same transaction Postgres happily
applies a subject update to a row inserted by a previous statement.
### Bug 2 —
`enrichMessageAccumulatorWithExistingMessageChannelMessageAssociations`
clobbers the accumulator
Independent latent bug spotted while tracing the flow. The helper did:
```ts
if (existingMessageChannelMessageAssociation) {
  messageAccumulatorMap.set(message.externalId, {
    existingMessageInDB: existingMessage,
    existingMessageChannelMessageAssociationInDB:
      existingMessageChannelMessageAssociation,
  });
}
```
i.e. it **replaces** the accumulator object, dropping the
`existingThreadInDB` set just before by
`enrichMessageAccumulatorWithExistingMessageThreadIds`.
The branch only fires when re-encountering a message that's already been
fully synced on this channel (matched on `headerMessageId` AND already
has an association row) — i.e. routinely on Gmail/IMAP incremental syncs
whenever the connector re-delivers an existing message (label change,
read/unread, archive, full-resync after error, …).
When it fires, the next enrichment step sees `existingThreadInDB` as
`undefined`, falls into the "create a new thread" branch, mints a fresh
`threadToCreate`, but the main loop never queues a `messageToCreate` or
association for it (because both `existingMessageInDB` and the existing
association are still set). Net effect: **one orphan `messageThread` row
inserted per re-encountered message, with nothing referencing it.**
The existing message in the DB keeps pointing at its real thread, so
this is invisible to users — no thread fragmentation, no UI symptoms, no
error logs. Just slow accumulation of orphan thread rows that no query
joins onto. Probably worth running
```sql
SELECT COUNT(*)
FROM "messageThread" mt
WHERE NOT EXISTS (
  SELECT 1 FROM message m WHERE m."messageThreadId" = mt.id
);
```
on a busy production workspace once this lands to size whether a cleanup
migration is warranted.
**Fix:** mutate the existing accumulator in place instead of replacing
it.
## Test plan
- [x] `oxlint --type-aware` clean on touched file
- [x] `prettier` clean
🤖 Generated with [Claude Code](https://claude.com/claude-code)
## Summary
Fixes #19377
- **Redis health**: The hit rate calculation divides by zero when both
`keyspace_hits` and `keyspace_misses` are `"0"` (common on fresh
instances). The string `"0"` is truthy so the guard
`statsData.keyspace_hits ? ...` doesn't catch this case, resulting in
`0/0 = NaN`. Fixed by computing the total first and checking it's a
valid non-zero number.
- **Database health**: The cache hit ratio query returns `null` when
`pg_statio_user_tables` is empty (no user tables). `parseFloat(null)` →
`NaN`. Fixed by adding a null check.
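The Redis guard described above can be sketched like this (function and
parameter names are illustrative, not the actual health-indicator code):

```typescript
// Redis reports keyspace stats as strings, so "0" is truthy; compute the
// total first and only divide when it is a valid non-zero number.
const computeHitRate = (
  keyspaceHits: string | undefined,
  keyspaceMisses: string | undefined,
): number | null => {
  const hits = Number(keyspaceHits ?? '0');
  const misses = Number(keyspaceMisses ?? '0');
  const total = hits + misses;

  if (!Number.isFinite(total) || total === 0) {
    return null; // fresh instance with no activity: report "unknown", not NaN
  }

  return hits / total;
};
```

Returning `null` lets the indicator display "n/a" instead of `NaN` on
fresh instances.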
## Test plan
- [ ] Verify health indicators display correctly on a fresh instance
with no Redis keyspace activity
- [ ] Verify health indicators display correctly on a database with no
user tables
- [ ] Existing tests in `redis.health.spec.ts` still pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: easonysliu <easonysliu@tencent.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
# Introduction
TypeORM is now only used to generate the migrations' up and down
statements.
We handle and maintain our own migration history table.
## What's new
Now all the instance commands will live within the same module and
folder as the upgrade commands.
Sequentiality comes from the timestamp located in the filename
The same sequentiality will also apply to the workspace commands in the
future; for the moment an explicit as-code declaration is still
expected.
(The screenshot below is an example; see the section below.)
<img width="1382" height="634" alt="image"
src="https://github.com/user-attachments/assets/5610a246-4eae-485e-99f4-98fb89ad5ac8"
/>
## Existing 1.21 migrations
We won't start following this pattern in 1.21, at least not with the
migrations that have already been released as typeorm migrations in
cloud production, as they would rerun.
## Small duplication
The legacy typeorm and instance command runs are duplicated in
`run-instance-commands` to avoid merging the two flows for the moment.
## Concurrency
We don't handle any run in parallel with the upgrade for the moment.
## Summary
- `upgrade:1-21:backfill-message-thread-subject` was failing on every
workspace with `Method not allowed because permissions are not
implemented at datasource level`.
- The global workspace datasource gates raw `query()` calls behind
`shouldBypassPermissionChecks`. Both the column-existence probe and the
UPDATE in this command now pass that flag, matching the pattern used by
the other 1.20/1.21 upgrade commands.
## Test plan
- [ ] Re-run `yarn command:prod
upgrade:1-21:backfill-message-thread-subject` and confirm all workspaces
complete without the permissions error
- [ ] Spot-check a workspace to confirm `messageThread.subject` is
backfilled from the most recent message
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
Clicking **Compose** in the emails tab without a connected account
redirects to the New Account settings page, but the settings nav drawer
was left in its previous (collapsed / "main") state — producing a
visibly half-broken transition.
### Root cause
The "enter settings" preparation (memorize previous URL + drawer state,
expand the desktop drawer, switch the mobile drawer to `'settings'`) was
duplicated **inline in three different places**:
- `NavigationDrawerOtherSection.handleSettingsClick`
- `MultiWorkspaceDropdownDefaultComponents` Settings link
- Implicitly expected (but missing) from every `useNavigateSettings`
caller
Every other entry point — `ComposeEmailButton`, `ComposeEmailCommand`,
`AIChatCreditsExhaustedMessage`, several workflow/role components — just
called `navigateSettings(...)` and skipped the prep entirely,
reproducing the bug.
### Fix
- Move the full prep into `useOpenSettingsMenu`, with a
`useIsSettingsPage()` short-circuit so internal navigation doesn't
clobber the memorized return target.
- `useNavigateSettings` delegates to `openSettingsMenu()` before
navigating — fixing every caller in one place.
- Collapse the duplicated inline logic in `NavigationDrawerOtherSection`
and `MultiWorkspaceDropdownDefaultComponents` to a single call.
Net **−9 lines**, single source of truth, no behavior change for the
existing happy paths.
## Test plan
- [x] `nx typecheck twenty-front` passes
- [x] `oxlint` + `prettier` clean on all 4 changed files
- [x] Existing `useNavigateSettings` tests pass (4/4)
- [ ] Manual: Compose button on a Person/Company/Opportunity emails tab
with no connected account → settings drawer renders fully expanded,
"Exit Settings" returns to the record
- [ ] Manual: "Settings" entry in the main nav drawer still works
(return path memorized)
- [ ] Manual: "Settings" entry in the multi-workspace dropdown still
works, and right-click → open in new tab still works (kept
`UndecoratedLink`)
- [ ] Manual: Navigating between settings pages does not overwrite the
memorized return URL
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Fixes #19070
The `neq` operator in `compute-where-condition-parts.ts` uses `OR` where
it should use `AND` when handling null-equivalent values.
Currently generates:
```sql
field != '' OR field IS NOT NULL
```
For a row where `field = ''`:
- `'' != ''` = false
- `'' IS NOT NULL` = true
- `false OR true` = true -- row incorrectly passes the filter
The `eq` operator correctly uses `OR field IS NULL` because it's
additive (match value or its null equivalent). By De Morgan's law, the
negation `neq` needs `AND field IS NOT NULL` -- exclude if the value
doesn't match AND is not a null equivalent.
With the fix:
```sql
field != '' AND field IS NOT NULL
```
- `'' != ''` = false, `'' IS NOT NULL` = true, `false AND true` = false
-- correctly excluded
- `NULL != ''` = NULL, `NULL IS NOT NULL` = false, `NULL AND false` =
false -- correctly excluded
- `'Alice' != ''` = true, `'Alice' IS NOT NULL` = true, `true AND true`
= true -- correctly included
Affects `neq` filters on TEXT fields and all composite sub-fields
(firstName, lastName, primaryEmail, primaryPhoneNumber, address
sub-fields, etc.) when filtering against null-equivalent values.
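The corrected `neq` branch reduces to something like this sketch
(function and argument names are illustrative, not the actual
`compute-where-condition-parts.ts` code):

```typescript
// De Morgan: NOT (value = x OR value is the null-equivalent)
//          = value != x AND value IS NOT NULL
const computeNeqConditionPart = (
  columnName: string,
  nullEquivalentValue: string,
): string =>
  `${columnName} != '${nullEquivalentValue}' AND ${columnName} IS NOT NULL`;
```

The conjunction is what excludes both the null-equivalent value and SQL
`NULL` itself, matching the truth table above.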
Co-authored-by: Etienne <45695613+etiennejouan@users.noreply.github.com>
Resolves [Dependabot Alert
491](https://github.com/twentyhq/twenty/security/dependabot/491).
Expecting it to resolve a few other minimatch-generated alerts too, but
merging will confirm which ones, since minimatch has many different
versions imported by different packages as a transitive dependency.
## Summary
- add the retro `VT323` font to the marketing site and apply it across
the Salesforce pricing card and popups
- expand the Salesforce pricing simulator with per-row metadata, unique
popup messages, dynamic price calculation, enterprise shared-cost
handling, and fixed-cost totals
- align the Salesforce card UI with the wireframes: sticky pricing
header, updated checkbox states, popup styling/behavior, add-on link,
and footer cleanup
- remove obsolete shared popup constants and quote form logic tied to
the Salesforce card
- refresh nearby pricing page UI details, including sticky menu behavior
and related pricing section polish
## Testing
- `yarn workspace twenty-website-new build`
## Summary
- **Inline email reply**: Replace external email client redirects
(Gmail/Outlook deeplinks) with an in-app email composer. Users can reply
to email threads directly from the email thread widget or via the
command menu.
- **SendEmail GraphQL mutation**: New backend mutation that reuses
`EmailComposerService` for body sanitization, recipient validation, and
SMTP dispatch via the existing outbound messaging infrastructure.
- **Side panel compose page**: Command menu "Reply" action now opens a
side-panel compose email page with pre-filled To, Subject, and
In-Reply-To fields.
### Backend
- `SendEmailResolver` with `SendEmailInput` / `SendEmailOutputDTO`
- `SendEmailModule` wired into `CoreEngineModule`
- Reuses `EmailComposerService` + `MessagingMessageOutboundService`
### Frontend
- `EmailComposer` / `EmailComposerFields` components
- `useSendEmail`, `useReplyContext`, `useEmailComposerState` hooks
- `useOpenComposeEmailInSidePanel` + `SidePanelComposeEmailPage`
- `EmailThreadWidget` inline Reply bar with toggle composer
- `ReplyToEmailThreadCommand` now opens side-panel instead of external
links
### Seeds
- Added `handle` field to message participant seeds for realistic email
addresses
- Seed `connectedAccount` and `messageChannel` in correct batch order
## Test plan
- [ ] Open an email thread on a person/company record → verify
"Reply..." bar appears below the last message
- [ ] Click "Reply..." → composer opens inline with pre-filled To and
Subject
- [ ] Type a message and click Send → email is sent via SMTP, composer
closes
- [ ] Use command menu Reply action → side panel opens with compose
email page
- [ ] Verify Send/Cancel buttons work correctly in side panel
- [ ] Test with Cc/Bcc toggle in composer fields
- [ ] Verify error handling: invalid recipients, missing connected
account
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Introduces a pluggable `WebSearchDriver` abstraction (interface,
factory, service, module) so web search is no longer tied to native
provider tools (Anthropic/OpenAI)
- **Exa** is the first driver implementation with support for
category-filtered search (company, people, news, research paper, etc.) —
particularly useful for CRM workflows
- Per-query billing for both Exa ($0.007/query) and native provider
surcharges ($0.01/query for Anthropic/OpenAI) via the existing
`USAGE_RECORDED` pipeline
- New config variables: `WEB_SEARCH_DRIVER` (EXA/DISABLED),
`EXA_API_KEY`, `WEB_SEARCH_PREFER_NATIVE` (default false — prefers Exa
over native when both available)
- `WEB_SEARCH` operation type added for usage tracking and Stripe
metering
### Architecture
```
WebSearchDriver (interface)
├── ExaDriver — Exa neural search with category support
└── DisabledDriver — throws when search is disabled
WebSearchDriverFactory (extends DriverFactoryBase)
└── creates driver based on WEB_SEARCH_DRIVER config
WebSearchService (facade)
├── search(query, options?, billingContext?)
├── isEnabled()
└── emits USAGE_RECORDED events per query
WebSearchTool (Tool implementation)
└── registered in ActionToolProvider, available via tool catalog
```
### Native search billing gap fixed
Anthropic and OpenAI both charge $0.01/search on top of token costs. The
token costs were already billed, but the per-call surcharge was not.
Added `countNativeWebSearchCallsFromSteps` utility +
`billNativeWebSearchUsage` to `AiBillingService`, wired into both chat
and workflow agent paths.
## Test plan
- [ ] Set `WEB_SEARCH_DRIVER=EXA` + `EXA_API_KEY=...` and verify AI chat
can search the web
- [ ] Verify category parameter works (ask about a specific
company/person)
- [ ] Set `WEB_SEARCH_DRIVER=DISABLED` and verify search tool is not
exposed
- [ ] Set `WEB_SEARCH_PREFER_NATIVE=true` with Anthropic model and
verify native search is used
- [ ] Verify usage events are emitted in ClickHouse for both Exa and
native search paths
- [ ] Verify existing billing tests pass (`npx jest
ai-billing.service.spec.ts`)
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
## Summary
- Move email thread display from side panel to a dedicated record page
with a new `EMAIL_THREAD` widget type
- Add message thread as a standard object with page layout, subject
field, and backfill command
- Add reply-to-email command menu item for message thread records
- Remove old side panel message thread components in favor of the new
widget-based approach
## Type fixes
- Add `EMAIL_THREAD` to `WidgetConfigurationType`, `WidgetType`, and all
configuration/validator maps
- Create `EmailThreadConfigurationDTO` and shared
`EmailThreadConfiguration` type
- Register EMAIL_THREAD in widget type validators, configuration
resolvers, and standard widget mappings
## Test plan
- [ ] Verify message thread record pages render with the email thread
widget
- [ ] Verify email thread preview navigates to the record page instead
of opening side panel
- [ ] Verify reply-to-email command appears for message thread records
- [ ] Verify typecheck passes for both twenty-front and twenty-server
- [ ] Run existing test suites to check for regressions
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
This PR migrates `messageFolder`.`parentFolderId` from an internal
`UUID` reference to the external provider id.
This eliminates an unnecessary lookup and some complexity.
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
This PR implements a whole lot of fixes and updates to the website. It
adds a release-notes page, a terms and conditions page, and a privacy
policy page, while fixing HomeStepper, ProductStepper, and
WhyTwentyStepper. The 3D models still need to be styled and their sizes
fixed.
# Introduction
TypeORM migrations are now associated with a given twenty version from
the `UPGRADE_COMMAND_SUPPORTED_VERSIONS` that the current twenty core
engine handles.
This way, when we upgrade, we retrieve only the migrations that need to
be run; this will be useful for the cross-version incremental upgrade so
we preserve sequentiality.
## What's new
To generate
```sh
npx nx database:migrate:generate twenty-server -- --name add-index-to-users
```
To apply all
```sh
npx nx database:migrate twenty-server
```
## Next
Introduce slow and fast typeorm migration in order to get rid of the
save point pattern in our code base
Create a clean and dedicated `InstanceUpgradeService` abstraction
## Summary
- Replaces `npm pkg set version=X` with a direct `node -e` script in the
`set-local-version` Nx target
- Fixes `CI Create App E2E minimal` workflow failures on fork PRs where
`npm pkg set` fails with `EJSONPARSE: Unexpected end of JSON input while
parsing empty string`
The `npm` command has unreliable `package.json` resolution in certain CI
checkout contexts (shallow clones of fork PR merge refs). Using `node`
directly to read/write the JSON file avoids this entirely.
The minimalMetadata query was returning untranslated English labels for
standard objects (Companies, People, Opportunities, etc.) when users
requested zh-CN locale. This fix adds translation logic to
MinimalMetadataService using the existing I18nService.
Changes:
- Add translateLabel() method to translate standard object labels
- Pass locale from GraphQL context through resolver to service
- Only translate non-custom objects (custom objects use user-defined
labels)
Co-authored-by: Kevin Yu <kevin.yu@example.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- **Remove `ExecuteToolResult` wrapper** — `execute_tool` is now a
transparent dispatcher that returns the raw `ToolOutput` from underlying
tools. No more `{ toolName, result }` envelope.
- **Type the entire execution chain as `Promise<ToolOutput>`** — from
`ToolExecutorService.dispatch()` through `resolveAndExecute()` to
`execute_tool.execute()`. Zero `Promise<unknown>` remaining in the tool
layer.
- **Use `Extract<ToolExecutionRef, ...>`** for dispatch methods,
enabling exhaustive switch checking and removing `as never` casts.
- **Relax `ToolOutput.result` to accept `null`** — removes `??
undefined` hacks at the boundary with logic function results.
- **Enforce 1-export-per-file** across tool type/interface files (split
`tool-descriptor.type.ts`, `tool-provider.interface.ts`, `tool.type.ts`,
`tool-output.type.ts`, `tool-executor.service.ts`).
- **Simplify error handling** — `wrapWithErrorHandler` and all
meta-errors (tool not found, tool excluded) now return consistent
`ToolOutput` shape with `error` as a plain string.
- **Frontend reads output directly** — removed `unwrapToolOutput`
utility; `ToolStepRenderer` and `ThinkingStepsDisplay` extract
`message`/`error` from the raw output with simple type guards.
- **Add permission error detection** for email tools via
`isInsufficientPermissionsError`, guiding the AI model to suggest
account reconnection instead of hallucinating about visibility settings.
## Test plan
- [ ] AI chat tool calls return visible output (not "null") in the UI
- [ ] Tool errors display correctly in the JSON tree
- [ ] Email draft/send tools return actionable permission errors
- [ ] Code interpreter output renders correctly
- [ ] Thinking steps display tool outputs properly
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Bug fixes exposed by always-on direct execution
1. GraphQL spec compliance — `data[field] = null` on resolver error
`direct-execution.service.ts` — Changed from `Promise.allSettled` (which
lost the `responseKey` on rejection) to `Promise.all` with per-field
try/catch; errors now set `data[responseKey] = null` per spec
2. Empty object arguments skipped (`extractArgumentsFromAst`)
`extract-arguments-from-ast.util.ts` — Removed the `isEmptyObject`
check; `filter: {}` and `data: {}` are now correctly passed to resolvers
instead of silently dropped (which caused permissions to never be
checked)
3. `orderBy: {}` factory default treated as "no ordering"
`direct-execution.service.ts` — Before calling the resolver, strips
`orderBy: {}` and `orderByForRecords: {}` (empty-object factory defaults
that mean "no ordering")
`assert-find-many-args.util.ts` / `assert-group-by-args.util.ts` —
Accept `{}` for `orderBy` without throwing
4. `orderBy: { field: '...' }` object auto-coerced to `[{ field: '...' }]`
array
`direct-execution.service.ts` — Applies GraphQL list coercion: a
non-array, non-empty `orderBy` object is wrapped in an array before the
assertion and resolver call
5. `totalCount` and aggregate fields returned as strings from PostgreSQL
`graphql-format-result-from-selected-fields.util.ts` — Added
`coerceAggregateValue`, which parses numeric strings to numbers for
`totalCount`, `sum*`, `avg*`, `min*`, `max*`, `count*`, and
`percentageOf*` fields
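Item 5's coercion can be sketched as follows (the helper name comes from
the description above; the exact field-name matching is an assumption):

```typescript
// PostgreSQL returns aggregates as strings; parse them back to numbers
// for totalCount and sum*/avg*/min*/max*/count*/percentageOf* fields.
const AGGREGATE_FIELD_PREFIXES = [
  'sum',
  'avg',
  'min',
  'max',
  'count',
  'percentageOf',
];

const isAggregateField = (fieldName: string): boolean =>
  fieldName === 'totalCount' ||
  AGGREGATE_FIELD_PREFIXES.some((prefix) => fieldName.startsWith(prefix));

const coerceAggregateValue = (fieldName: string, value: unknown): unknown => {
  if (
    isAggregateField(fieldName) &&
    typeof value === 'string' &&
    value.trim() !== '' &&
    !Number.isNaN(Number(value))
  ) {
    return Number(value);
  }
  return value;
};
```

Non-aggregate fields pass through untouched, so regular string columns
are never accidentally converted.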
Test updates
`nested-relation-queries.integration-spec.ts` — Updated the expected
error message from the Yoga schema-validation message to the direct
execution resolver message
~30 snapshot files — Updated to reflect direct execution's error
messages (which differ from Yoga's schema-validation messages for input
type errors)
- WorkflowCronTriggerCronJob was querying every active workspace (~700)
sequentially every minute to find cron triggers, causing regular CPU
spikes to 100% on worker pods
- Added a Redis hashset cache that stores cron triggers. On cache miss
(TTL expired, cold start, or explicit invalidation), a full scan
rebuilds the cache
- Creating or deleting a cron trigger updates the cache, but only if the
cache already exists
Fixes https://github.com/twentyhq/twenty/issues/19309
In exclusion mode (select all), `selectedRecords` is always `[]` since
individual record IDs aren't tracked. The
`noneDefined()`/`everyDefined()` checks return false on empty arrays by
design, which hides the delete, restore, and destroy commands from the
command menu.
- Wrap the `selectedRecords` array checks with `isSelectAll || ...` to
bypass them in exclusion mode
- Remove the `numberOfSelectedRecords < 10000` limit
- Add `upgrade:1-21:fix-select-all-command-menu-items` command to
backfill existing workspaces
- Add tests
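The bypass reduces to a check like this sketch (names are illustrative;
the real logic lives in the command menu visibility checks):

```typescript
// In exclusion mode the selected-ids array is empty by design, so the
// select-all flag must short-circuit the array checks.
const shouldShowBulkCommands = (
  isSelectAll: boolean,
  selectedRecordIds: string[],
): boolean => isSelectAll || selectedRecordIds.length > 0;
```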
- Simplify the base application template
- Remove the `--exhaustive` option and replace it with an `--example`
option, like in next.js: https://nextjs.org/docs/app/api-reference/cli
- Fix some bugs and logs
- Add a post-card app in `twenty-apps/examples/`
## Summary
Rebased version of #19051 with all review comments addressed. Clean
branch on latest main, lint/typecheck/tests passing.
### Changes from original PR
- AI can now create, update, and delete **view filters**
(`ViewFilterToolsFactory`) and **view sorts** (`ViewSortToolsFactory`)
- `create_view` now accepts `calendarFieldName`, `calendarLayout`, and
`fieldNames` to configure views at creation time
- Three new standard skills: `view-building`, `view-filters-and-sorts`,
`custom-objects-cleanup`
- `workspace-demo-seeding` skill reworked to keep standard objects and
enrich them with custom fields
- Cache invalidation for nav menu items when object `isActive` changes
- Dashboard tool descriptions improved (RECORD_TABLE widget workflow)
### Review comments addressed (all 10 from #19051)
1. **Sentry + Cubic**: Calendar field DATE/DATE_TIME validation — added
`resolveCalendarFieldMetadataId` using `isFieldMetadataDateKind`
2. **Cubic**: "navigate tool" → "navigate_app tool" in skill metadata
(all 7 occurrences)
3. **Copilot**: KANBAN views now require `mainGroupByFieldName` — throws
clear error if missing
4. **Copilot**: CALENDAR views now require both `calendarFieldName` and
`calendarLayout` — validated before DB call
5. **Copilot**: Mock field fixtures include `type` property (DATE_TIME,
TEXT, SELECT)
6. **Copilot**: `ViewFilterValue` type assertion instead of unsafe `as
string` casts (3 locations)
7. **FelixMalfait**: Removed
`NavigationMenuItemObjectDeactivationListener` — replaced with cache
invalidation
8. **FelixMalfait**: Consolidated `ViewFilterToolProvider` and
`ViewSortToolProvider` into single `ViewToolProvider`
9. Removed `VIEW_FILTER` and `VIEW_SORT` from `ToolCategory` enum
(merged into `VIEW`)
10. Removed stale `existingFeatureFlagsMap` param incompatible with
current main
## Test plan
- [x] `npx nx lint:diff-with-main twenty-server` — passes
- [x] `npx nx typecheck twenty-server` — passes
- [x] `view-tools.factory.spec.ts` — all 20 tests pass (including 3 new
validation tests)
Supersedes #19051
https://claude.ai/code/session_01QPV74NU6vzmJb32e4i899E
---------
Co-authored-by: Claude <noreply@anthropic.com>
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- Bumps `handlebars` from `^4.7.8` to `^4.7.9` in
`packages/twenty-shared`
## Why
- **CVE-2026-33937** — Prototype pollution via crafted template input
- **GHSA-2w6w-674q-4c4q** — Related handlebars security advisory
- Severity: **Critical** (CVSS 9.8)
- Detected by Trivy and Grype scanning `twentycrm/twenty:v1.20.0`
## What changed
- `packages/twenty-shared/package.json`: `"handlebars": "^4.7.8"` →
`"^4.7.9"`
- `yarn.lock`: updated accordingly
## Impact
`handlebars` is used in `packages/twenty-shared` for template
evaluation. The fix patches the prototype pollution vector without any
API changes.
## Test plan
- [ ] `yarn build` passes
- [ ] `yarn test` passes in twenty-shared
- [ ] Template evaluation works as before
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Abdullah <125115953+mabdullahabaid@users.noreply.github.com>
**Summary**
- update the home hero navbar controls and surface styling to better
match the latest visual design
- simplify row hover actions by removing edit affordances and disabling
hover controls for `createdBy` and `accountOwner`
- tighten chip typography and spacing for more consistent hero table
rendering
- include the captured Playwright screenshot artifact for reference
**Testing**
- Not run (not requested)
## Summary
- **Make app-synced objects searchable**: `isSearchable` was hardcoded
to `false` and the `searchVector` field was missing the `GENERATED
ALWAYS AS (...)` expression, causing all records to have a `NULL` search
vector and be excluded from search results. Fixed by defaulting
`isSearchable` to `true` (configurable via the object manifest),
computing the `asExpression` from the label identifier field, and
allowing the update-field-action-handler to handle the `null` → defined
`asExpression` transition.
- **Make `isSearchable` updatable on an object**: The property had
`toCompare: false` in the entity properties configuration, so updates
via the API were silently ignored and never persisted. Fixed by setting
`toCompare: true`.
Fixes: #19230
CSV exports containing Arabic, Chinese, Japanese, or Korean characters
displayed as garbled text in Excel because the file lacked a UTF-8 BOM.
Added a `\uFEFF` prefix to the CSV Blob so Excel and other spreadsheet
apps correctly interpret the file as UTF-8.
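The change can be sketched as follows (helper name hypothetical, not the actual export code):

```typescript
// Hypothetical sketch: prepend a UTF-8 BOM so Excel detects the encoding.
const UTF8_BOM = '\uFEFF';

export const createCsvBlob = (csvContent: string): Blob =>
  new Blob([UTF8_BOM, csvContent], { type: 'text/csv;charset=utf-8' });
```

The BOM encodes to the three bytes `EF BB BF`, which Excel reads as an explicit UTF-8 marker before the CSV content.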
## Table value
<img width="370" height="277" alt="image"
src="https://github.com/user-attachments/assets/ed668f00-93f0-4991-bc87-42e3cb314dbc"
/>
## Before
<img width="206" height="160" alt="image"
src="https://github.com/user-attachments/assets/a5324dbc-bd65-4443-a7fb-78047028bf91"
/>
## After
<img width="180" height="155" alt="image"
src="https://github.com/user-attachments/assets/cae5f38d-6c91-4017-96ae-9924bb41d0e0"
/>
And those are my test data:

| Script | Example | Language |
| -- | -- | -- |
| Arabic | مرحبا | Arabic, Persian, Urdu |
| Chinese | 你好 | Chinese |
| Japanese | こんにちは | Japanese |
| Korean | 안녕하세요 | Korean |
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Félix Malfait <felix@twenty.com>
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Auto-merge passed too fast on
https://github.com/twentyhq/twenty/pull/19271
A try/catch was added while debugging an integration test suite covering
workspace creation, which would fail due to an SDK generation failure.
Because SDK generation runs synchronously in integration tests, the
failure would halt the workspace creation process.
If the proposed fix here isn't enough, I'll wrap the sync run in the
tests to catch any unexpected issue.
## PR description
The previous approach used a detached flag to represent selection mode,
which could fall out of sync with the actual UI selection state. This
change ties the edit mode's selection/fallback behavior directly to
whether records are actually selected in the record index.
- Removes `commandMenuItemEditSelectionModeState` (a standalone 'none' |
'selection' atom) and replaces it with
`mainContextStoreHasSelectedRecordsSelector`, which derives selection
state from the real record selection in the context store.
- Adds `useSelectFirstRecordForEditMode` to programmatically select the
first record (table row or kanban card) when toggling to selection mode,
and `useResetRecordIndexSelection` to clear selection when toggling off
or exiting edit mode.
- Ensures record selection is properly cleaned up when exiting layout
customization mode or closing the side panel.
## Video QA
### Table
https://github.com/user-attachments/assets/1d845809-8369-4872-b3a9-a0281b8e464b
### Board
https://github.com/user-attachments/assets/325c2d58-4ca0-408a-ab4a-66b97d9d70de
## Summary
- Adds `IS_AI_ENABLED` to `PUBLIC_FEATURE_FLAGS` so it appears in
**Settings > Lab** as a self-serve toggle for users
- Includes a cover image (`is-ai-enabled.png`) hosted on
`twenty.com/images/lab/`, following the existing convention for lab
feature flag images
- AI is now ready for public beta — users can enable it directly from
the Lab
## Test plan
- [ ] Verify the AI card appears in Settings > Lab with the cover image
banner
- [ ] Toggle AI on/off and confirm the feature flag is persisted
- [ ] Confirm AI features (agent chat, etc.) activate when the flag is
enabled
Made with [Cursor](https://cursor.com)
Avoid overloading the workspace creation with the standard and custom
app SDK generation by running it through a job instead.
```ts
[Nest] 90397 - 04/02/2026, 4:44:20 PM LOG [CoreEntityCacheService] Cache stats: localCacheSize=0 memoizerCacheSize=0 memoizerPendingSize=0 entriesByKey={} versionsByKey={}
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Processing job 4325 with name CallWebhookJobsForMetadataJob on queue webhook-queue
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Processing job 1 with name GenerateSdkClientJob on queue workspace-queue
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Job 1 with name GenerateSdkClientJob processed on queue workspace-queue in 0.64ms
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Processing job 2 with name GenerateSdkClientJob on queue workspace-queue
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Job 2 with name GenerateSdkClientJob processed on queue workspace-queue in 0.06ms
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Job 4325 with name CallWebhookJobsForMetadataJob processed on queue webhook-queue in 17.47ms
[Nest] 90397 - 04/02/2026, 4:45:03 PM LOG [BullMQDriver] Processing job 4326 with name CallWebhookJobsForMetadataJob on queue webhook-queue
```
- Replace soft-deletion (deletedAt) with isActive boolean for
overridable entities (tabs, widgets, viewFieldGroups, viewFields).
Standard entities are deactivated (isActive: false) when removed from
update payloads, while custom entities are hard-deleted.
- When a viewFieldGroup is deactivated/deleted, its viewFields are
reassigned to the next section by position (or null if none remain).
- Add isActive: true filters to viewFieldGroup and viewField API queries
so deactivated entities are excluded from responses.
Next:
- `fields-widget-upsert.service.ts` should be refactored a bit
- Add restore logic
Insert operations were blocked by field-level update permissions on
non-editable fields.
The insert code path in `permissions.utils.ts` fell through to the
update case (a missing `break`), causing
`validateUpdateFieldPermissionOrThrow` to reject inserts whenever any
field had "Edit disabled", which could conflict with RLS predicates
(used for inserting new records).
Fixes https://github.com/twentyhq/twenty/issues/19201
We will keep checking update permissions for insertion (until we decide
to have a separate permission flag for insertion), but to fix the issue
we now skip this check when it conflicts with an RLS predicate.
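The fall-through shape of the bug can be illustrated in isolation (names hypothetical, not the actual `permissions.utils.ts` code):

```typescript
// Hypothetical sketch of the bug shape: without `break`, the 'insert'
// case falls through into update-only validation.
type Operation = 'insert' | 'update';

const validatePermission = (
  operation: Operation,
  fieldEditDisabled: boolean,
): void => {
  switch (operation) {
    case 'insert':
      // Insert-specific checks would go here. The original code was
      // missing a `break`, so execution continued into the 'update' case.
      break;
    case 'update':
      if (fieldEditDisabled) {
        throw new Error('Field update not allowed');
      }
      break;
  }
};
```

With the `break` in place, an insert on a field with "Edit disabled" no longer throws, while updates on that field still do.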
This PR makes the Home Visual interactive.
Not the perfect code, but does the trick for now to convey what we're
trying to do. As per the conversation with Thomas, we will iterate over
it and make it better when we get past the first release.
---------
Co-authored-by: Thomas des Francs <tdesfrancs@gmail.com>
# Introduction
Currently preparing the `UpgradeCommand` refactor. As a first step, this
PR cleans up the existing code, removing unnecessary dependencies on
other services for easier readability and better concern centralization
ahead of the upcoming refactor.
The `UpgradeCommand` was extending up to five classes, overriding
abstract methods, and so on. It was also cascade-injecting 3 services.
Introducing the `WorkspaceIteratorService`, which centralizes the "run a
set of commands over a single workspace" logic shared between the atomic
upgrade command call and the global `UpgradeCommand`.
## Tradeoff
Duplicated `@Option` between `UpgradeCommandRunner` and
`WorkspaceMigrationRunner`
## Before
```
UpgradeCommand
└─ extends UpgradeCommandRunner
└─ extends ActiveOrSuspendedWorkspacesMigrationCommandRunner
└─ extends WorkspacesMigrationCommandRunner (owns workspace iteration loop + ORM deps)
└─ extends MigrationCommandRunner (dry-run, verbose, error handling)
└─ extends CommandRunner (nest-commander)
```
## Now
```
UpgradeCommand
└─ extends UpgradeCommandRunner
└─ extends CommandRunner (nest-commander)
uses ─► Services (via composition)
```
## Logging management
At the moment all services are logging; ideally, only the runners should
be doing so.
The backend validation for field permissions was using `!== null` to
check `canReadFieldValue` and `canUpdateFieldValue`, but these optional
GraphQL fields can also be `undefined` when omitted from the input. This
caused the validation to incorrectly reject legitimate requests (e.g.,
restricting only "update" without specifying "read") with the error
"Field permissions can only be used to restrict access, not to grant
additional permissions."
Replaced the `!== null` checks with `isDefined()` so that both `null`
and `undefined` are treated as "no opinion" on that permission.
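The difference can be shown with a small sketch (a local `isDefined`, assumed to match the shared helper's semantics; input type simplified):

```typescript
// isDefined treats both null and undefined as "no opinion".
const isDefined = <T>(value: T | null | undefined): value is T =>
  value !== null && value !== undefined;

type FieldPermissionInput = {
  canReadFieldValue?: boolean | null;
  canUpdateFieldValue?: boolean | null;
};

// Buggy check: for an omitted field, `undefined !== null` is true, so
// the request looked like an explicit permission statement.
const buggyRestricts = (input: FieldPermissionInput): boolean =>
  input.canReadFieldValue !== null;

// Fixed check: only a concretely provided value counts.
const fixedRestricts = (input: FieldPermissionInput): boolean =>
  isDefined(input.canReadFieldValue);
```

For an input that only sets `canUpdateFieldValue`, the buggy check claims a "read" opinion exists while the fixed check correctly reports none.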
To reproduce:
- Create a single FieldPermission on an object with canEdit: false
without any other rule on that same object
## Summary
- Set `accent="blue"` on InformationBanner action button so it renders
blue instead of default gray
- Add Banner storybook stories for all color × variant combinations
(BluePrimary, BlueSecondary, DangerPrimary, DangerSecondary)
- Use `useUserTimezone()` in `SettingsDatePickerInput` instead of
browser timezone (`Temporal.Now.timeZoneId()`) so dates respect the
admin's profile timezone preference
- Separate `onChange` from `onClose` in `SettingsDatePickerInput` so
changing the hour no longer forces the date picker to close
## Summary
- **Apollo factory** (`apollo.factory.ts`): Bail early with `EMPTY` when
no token pair exists so no request is forwarded without credentials.
Propagate renewal success/failure as a boolean so failed renewals stop
the operation chain instead of forwarding with stale tokens.
- **Refresh token service** (`refresh-token.service.ts`): When a revoked
refresh token is reused past the grace period, reject only that token
instead of mass-revoking all user tokens. The most common cause is a
lost renewal response (e.g. navigation during refresh), not actual token
theft. This eliminates the "Suspicious activity detected" errors users
were seeing. Also switches to `findOneBy` since the `appTokens` relation
is no longer needed.
## Test plan
- [x] `refresh-token.service.spec.ts` — all 7 tests pass
- [ ] Verify login flow: sign in from `app.localhost`, get redirected to
workspace subdomain without "Suspicious activity" errors
- [ ] Verify token renewal: let access token expire, confirm silent
renewal works and operations resume
- [ ] Verify concurrent tabs: open multiple tabs, let tokens expire,
confirm no mass revocation cascade
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
## Summary
- Adds an index on `viewField.viewFieldGroupId` to fix a ~28s `DELETE
FROM core.viewFieldGroup WHERE workspaceId = $1` query observed in
production
- The slow query is triggered during workspace hard deletion
(`deleteWorkspaceSyncableMetadataEntities` in `workspace.service.ts`)
- Root cause: the FK constraint `viewField.viewFieldGroupId →
viewFieldGroup.id` with `ON DELETE SET NULL` forces PostgreSQL to
sequentially scan the entire `viewField` table for every deleted
`viewFieldGroup` row, because no index exists on the referencing column
- Uses `CREATE INDEX CONCURRENTLY` in the migration to avoid locking the
table on production
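A sketch of the migration SQL (index name assumed, not the generated one). `CREATE INDEX CONCURRENTLY` cannot run inside a transaction, so the migration must opt out of transactional execution:

```typescript
// Hypothetical index name; the real migration is generated by tooling.
// CONCURRENTLY builds the index without an exclusive table lock, at the
// cost of having to run outside a transaction.
export const up = `
  CREATE INDEX CONCURRENTLY IF NOT EXISTS "IDX_VIEW_FIELD_VIEW_FIELD_GROUP_ID"
    ON "core"."viewField" ("viewFieldGroupId");
`;

export const down = `
  DROP INDEX CONCURRENTLY IF EXISTS "core"."IDX_VIEW_FIELD_VIEW_FIELD_GROUP_ID";
`;
```

With the index in place, each `viewFieldGroup` delete resolves its `ON DELETE SET NULL` references via an index lookup instead of a sequential scan of `viewField`.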
### Context
GraphQL introspection queries (__schema, __type) were going through the
full Yoga server pipeline, which forces a complete workspace schema
build, loading flat metadata maps, building all GraphQL types, wiring
all resolver factories, and calling makeExecutableSchema. This is
expensive, even though introspection only needs the type structure and
zero resolver execution.
A new WorkspaceGraphqlSchemaSDLService extracts the SDL computation that
was previously embedded inside WorkspaceSchemaFactory.
From `direct-execution.service.ts`:
- `buildSchema(sdl)` reconstructs a resolver-free `GraphQLSchema` from the
SDL
- pure CPU, not cached; not sure it's worth it?
- `execute({ schema, document, variableValues })` from graphql-js,
introspection is answered entirely by the graphql-js runtime from type
metadata, no resolver execution needed
#### Nice to have?
- Use the new cache service for `typeDefs`?
### Renaming bonus: `typeDefs` → `sdl`
`typeDefs` is an Apollo/graphql-tools convention. It's the parameter
name in
`makeExecutableSchema({ typeDefs, resolvers })`, not a native GraphQL
spec term.
In proper GraphQL semantics, what this service produces is the **SDL**
(Schema
Definition Language): the official term for the string representation of
a schema.
`printSchema()` produces it, `buildSchema()` consumes it.
##### Why the distinction matters
- **Type definitions** implies partial type declarations (objects,
scalars, enums…)
- **Schema SDL** conveys a *complete* schema document: all types
**plus** the root
operation types (`Query`, `Mutation`) which is exactly what
`printSchema(schema)`
produces
This PR completes the markup for all the sections of all pages of the
new website. There are a lot of improvements to come, but the entire
structure is in place now.
closes
https://discord.com/channels/1130383047699738754/1480991390782455838
- Use data-code-execution as the streaming source of truth and hide
duplicate code-interpreter tool parts (including tool-execute_tool
wrappers).
- Ensure wrapped execute_tool code-interpreter outputs still render
correctly after refetch.
- Gate code-interpreter server behavior by enablement state and keep
assistant messages full-width to avoid streaming vs completed width
shifts.
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
## Summary
- Add `WORKSPACE_SCHEMA_DDL_LOCKED` env-only boolean config variable
that blocks all workspace schema DDL changes when set to `true`. This is
intended for hot upgrades where logical replication cannot handle DDL
changes. Enforced at two chokepoints:
- `WorkspaceMigrationRunnerService.run` — blocks all metadata-driven DDL
(object/field/index CRUD, app sync/uninstall, standard app sync, upgrade
commands)
- `WorkspaceDataSourceService.createWorkspaceDBSchema` /
`deleteWorkspaceDBSchema` — blocks workspace creation (sign-up) and hard
deletion. Uses a dedicated `WorkspaceDataSourceException` (not
ForbiddenException)
- Add maintenance mode feature with Admin Panel UI and user-facing
banner:
- **Backend**: `MaintenanceModeService` stores maintenance window
(startAt, endAt, optional link) in `core.keyValuePair` as
`CONFIG_VARIABLE`. Validates endAt > startAt. Uses `GraphQLISODateTime`
scalar for date fields. Exposed via `clientConfig` REST endpoint and
admin GraphQL mutations (`setMaintenanceMode`, `clearMaintenanceMode`)
- **Admin Panel**: New "Maintenance Mode" section in Health tab with UTC
datetime pickers and activate/deactivate controls
- **Banner**: `InformationBannerMaintenance` displayed at the top of
`DefaultLayout` for all users, using Temporal API for timezone-aware
formatting with an optional "Learn more" link
These two features are **independent** — the DDL lock is controlled via
env var for operational use, while maintenance mode is a UI notification
mechanism controlled from the admin panel.
- Deduplicate standard record command menu items by merging
single-record and multi-record delete, restore, destroy, and export
actions into shared record commands.
- Update frontend command registrations and engine component mappings to
use the new unified keys, while keeping deprecated engine keys
temporarily mapped for backward compatibility.
- Add a 1-21 upgrade command that removes legacy command menu items from
workspaces and creates the new unified standard items.
- `frontComponentHostCommunicationApi` gets a new object reference on
every render, causing the `useMemo`/`useEffect` in
`FrontComponentWorkerEffect` to tear down and re-create the web worker
each time.
- Decouple the host API lifecycle from the worker lifecycle by moving
thread.exports updates into a dedicated
`FrontComponentUpdateHostCommunicationApiEffect` that mutates the
thread's exports object in place via Object.assign.
- Rename `FrontComponentHostCommunicationApiEffect` to
`FrontComponentInitializeHostCommunicationApiEffect` for clarity.
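The stable-reference pattern can be sketched as follows (names hypothetical, simplified from the thread-exports setup):

```typescript
// Sketch: mutate the exports object in place so consumers keyed on its
// identity (useMemo/useEffect dependencies) don't tear down and re-create
// the worker on every render.
type HostApi = { openRecord?: (id: string) => void };

const threadExports: HostApi = {};

const updateHostApi = (nextApi: HostApi): HostApi => {
  // Object.assign mutates the existing reference instead of replacing it,
  // so effects depending on `threadExports` identity are not retriggered.
  return Object.assign(threadExports, nextApi);
};
```

The worker keeps a single long-lived reference while the host API's methods are swapped underneath it on each render.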
## Before
https://github.com/user-attachments/assets/6f3a5c14-2ae7-4317-82b5-1625abb4143e
## After
https://github.com/user-attachments/assets/1059d6cd-e02c-4477-b3e8-8e965a716434
Bumps [qs](https://github.com/ljharb/qs) from 6.14.2 to 6.15.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/ljharb/qs/blob/main/CHANGELOG.md">qs's
changelog</a>.</em></p>
<blockquote>
<h2><strong>6.15.0</strong></h2>
<ul>
<li>[New] <code>parse</code>: add <code>strictMerge</code> option to
wrap object/primitive conflicts in an array (<a
href="https://redirect.github.com/ljharb/qs/issues/425">#425</a>, <a
href="https://redirect.github.com/ljharb/qs/issues/122">#122</a>)</li>
<li>[Fix] <code>duplicates</code> option should not apply to bracket
notation keys (<a
href="https://redirect.github.com/ljharb/qs/issues/514">#514</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d9b4c66303"><code>d9b4c66</code></a>
v6.15.0</li>
<li><a
href="cb41a545a3"><code>cb41a54</code></a>
[New] <code>parse</code>: add <code>strictMerge</code> option to wrap
object/primitive conflicts in...</li>
<li><a
href="88e15636da"><code>88e1563</code></a>
[Fix] <code>duplicates</code> option should not apply to bracket
notation keys</li>
<li><a
href="9d441d2704"><code>9d441d2</code></a>
Merge backport release tags v6.0.6–v6.13.3 into main</li>
<li><a
href="85cc8cac6b"><code>85cc8ca</code></a>
v6.12.5</li>
<li><a
href="ffc12aa710"><code>ffc12aa</code></a>
v6.11.4</li>
<li><a
href="0506b11e45"><code>0506b11</code></a>
[actions] update reusable workflows</li>
<li><a
href="6a37fafc75"><code>6a37faf</code></a>
[actions] update reusable workflows</li>
<li><a
href="8e8df5a3b1"><code>8e8df5a</code></a>
[Fix] fix regressions from robustness refactor</li>
<li><a
href="d60bab35a4"><code>d60bab3</code></a>
v6.10.7</li>
<li>Additional commits viewable in <a
href="https://github.com/ljharb/qs/compare/v6.14.2...v6.15.0">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Summary
- **Queue messages while streaming**: Messages sent during active AI
streaming are queued server-side and auto-flushed when the current
stream completes. Frontend renders queued messages optimistically in a
dedicated queue UI.
- **Drop `@ai-sdk/react` + `resumable-stream`**: Replace the dual HTTP
SSE + AI SDK client architecture with a single GraphQL SSE subscription
per thread. All events (token chunks, message persistence, queue
updates, errors) flow through Redis PubSub → GraphQL subscription.
- **Server-driven architecture**: The server decides whether to queue or
stream (via `POST /:threadId/message`). The frontend mirrors this
decision for optimistic rendering but defers to the server response.
- **Reuse AI SDK accumulation logic**: `readUIMessageStream` from the
`ai` package handles chunk-to-message accumulation on the frontend,
avoiding a custom 780-line accumulator.
## Key files
**Backend:**
- `agent-chat-event-publisher.service.ts` — publishes events to Redis
PubSub
- `agent-chat-subscription.resolver.ts` — GraphQL subscription resolver
- `stream-agent-chat.job.ts` — publishes chunks via PubSub instead of
resumable-stream
- `agent-chat.controller.ts` — unified `POST /:threadId/message`
endpoint
**Frontend:**
- `useAgentChatSubscription.ts` — subscribes to `onAgentChatEvent`,
bridges to `readUIMessageStream`
- `useAgentChat.ts` — send/stop/optimistic rendering (no more AI SDK)
- `AgentChatStreamSubscriptionEffect.tsx` — replaces
`AgentChatAiSdkStreamEffect.tsx`
## Test plan
- [ ] Send message on new thread → optimistic render, streaming response
appears
- [ ] Send message while streaming → queued instantly (no flash in main
thread)
- [ ] Queued message auto-flushes after current stream completes
- [ ] Remove queued message via queue UI
- [ ] Stop streaming mid-response
- [ ] Leave chat idle for several minutes → streaming still works after
(SSE client recycling)
- [ ] Token refresh during session → requests succeed (authenticated
fetch)
- [ ] Switch threads while streaming → clean subscription handoff
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Context
Add a new "Manage" section in FIELDS widget side panel with a "Delete
widget" action
Next step: Deleting a non-custom fields widget should be reversible
("Reset to default" followup)
<img width="1286" height="398" alt="Screenshot 2026-03-31 at 15 17 31"
src="https://github.com/user-attachments/assets/9ee63c14-52f1-470e-9112-4538271dc6fc"
/>
- Adds a `resolveObjectMetadataLabel` utility that returns the singular
or plural label from object metadata based on the number of selected
records (e.g., "person" vs "people").
- Exposes a pre-computed `objectMetadataLabel` string on
`CommandMenuContextApi`, making it available for label interpolation
(e.g., `Delete ${capitalize(objectMetadataLabel)}`).
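A minimal sketch of the utility (the shape of the metadata argument is assumed):

```typescript
// Pick the singular vs plural label based on how many records are selected.
type ObjectMetadataLabels = {
  labelSingular: string;
  labelPlural: string;
};

const resolveObjectMetadataLabel = (
  objectMetadata: ObjectMetadataLabels,
  selectedRecordCount: number,
): string =>
  selectedRecordCount === 1
    ? objectMetadata.labelSingular
    : objectMetadata.labelPlural;
```

For example, one selected record yields "person" while three yield "people".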
As per the title, update the documentation on how to upload a file,
given the increasing number of questions about this.
CC: @StephanieJoly4
---------
Co-authored-by: Etienne <45695613+etiennejouan@users.noreply.github.com>
## 📄 Summary
This PR upgrades the `nodemailer` dependency to a secure version
(≥ 8.0.4) to fix a known SMTP command injection vulnerability
(GHSA-c7w3-x93f-qmm8).
## 🚨 Issue
The version currently used in `twenty-server` (`^7.0.11`, resolved to
7.0.11 / 7.0.13) is vulnerable to SMTP command injection due to improper
sanitization of the `envelope.size` parameter.
This could allow CRLF injection, potentially enabling attackers to add
unauthorized recipients to outgoing emails.
## 🔍 Root Cause
The vulnerability originates from insufficient validation of
user-controlled input in the SMTP envelope, specifically the `size`
field, which can be exploited via crafted input containing CRLF
sequences.
## ✅ Changes
- Upgraded `nodemailer` to version `^8.0.4`
- Ensured compatibility with existing email-sending logic
- Verified that no breaking changes affect current usage
## 🔐 Security Impact
This update mitigates the risk of:
- SMTP command injection
- Unauthorized email recipient manipulation
- Potential data leakage via crafted email payloads
## 📎 References
- GHSA: GHSA-c7w3-x93f-qmm8
- CVE: (see linked report in issue)
---------
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
Co-authored-by: Charles Bochet <charlesBochet@users.noreply.github.com>
## Summary
- Replaces `twentycrm/twenty-postgres-spilo` with the official
`postgres:16` image across all 7 CI workflow files
- Removes Docker Hub `credentials` blocks from all service containers
(postgres, redis, clickhouse)
- Removes the `Login to Docker Hub` step from the breaking changes
workflow
## Context
Fork PRs cannot access repository secrets/variables, causing `${{
vars.DOCKERHUB_USERNAME }}` and `${{ secrets.DOCKERHUB_PASSWORD }}` to
resolve to empty strings. GitHub Actions rejects empty credential values
at template validation time, failing the job before any step runs.
The custom spilo image was the original reason credentials were needed
(to avoid Docker Hub rate limits on non-official images). The only
Postgres extensions required in CI (`uuid-ossp`, `unaccent`) are built
into the official `postgres:16` image. Official Docker Hub images have
significantly higher pull rate limits and don't require authentication.
## Summary
- Adds `search` and `searches` to the `RESERVED_METADATA_NAME_KEYWORDS`
list in `twenty-shared`
- Prevents users from creating custom objects named "search", which
collides with the core `search` GraphQL resolver
## Summary
- Adds a new `upgrade:1-21:backfill-datasource-to-workspace` command
that copies `dataSource.schema` into `workspace.databaseSchema` for all
active/suspended workspaces that haven't been migrated yet
- Registers the command in the 1-21 upgrade module and wires it into the
upgrade runner (runs before other 1-21 commands)
- Part of the ongoing deprecation of the `dataSource` table in favor of
storing `databaseSchema` directly on `WorkspaceEntity`
This PR completes the sections we need for the Homepage. Assets, such as
images, are still placeholder, and will be replaced as they become
available. We're still waiting on Lottie and 3d asset files from the
design team.
Database batch events (update / insert / soft-delete / hard-delete) were
building `recordsBefore` / `recordsAfter` after `formatResult` ran
twice: once inside `WorkspaceSelectQueryBuilder.getMany()` / `getOne()`,
and again in the CUD query builders. On the second pass, already-shaped
composite fields (e.g. emails) went through `formatFieldMetadataValue`
and kept the same object references as the live TypeORM row, so
`recordsBefore` could change when the entity was updated, breaking
workflow triggers and diffs.
Fixes: #18943
Follow-up pr: #19001
## Summary
- Timeline activity shows file upload history, but deleted files had no
signed URL and were still rendered as clickable, so clicking did nothing
- Grab the `fileId` from `properties.diff.after` and look it up in the
current record's files field: if present, use the live signed URL; if
absent, mark it as deleted
- Deleted file chips show a line-through label, a not-allowed cursor,
and a "File no longer exists" tooltip on hover
https://github.com/user-attachments/assets/5df6a675-0003-4fd1-ad57-a07e4338923f
---------
Co-authored-by: Etienne <45695613+etiennejouan@users.noreply.github.com>
## Context
Checked in the codebase where we are trying to rollback a non-active
transaction.
packages/twenty-server/src/engine/core-modules/auth/services/sign-in-up.service.ts
seemed to be the only place where it happens.
We could enforce this with a lint rule in the future 🤔
## Overview
Add resumable stream support for agent chat to allow clients to
reconnect and resume streaming responses if the connection is
interrupted (e.g., during page refresh).
## Changes
### Backend (Twenty Server)
- Add `activeStreamId` column to `AgentChatThreadEntity` to track
ongoing streams
- Create `AgentChatResumableStreamService` to manage Redis-backed
resumable streams using the `resumable-stream` library with ioredis
- Extend `AgentChatController` with:
- `GET /:threadId/stream` endpoint to resume an existing stream
- `DELETE /:threadId/stream` endpoint to stop an active stream
- Update `AgentChatStreamingService` to store streams in Redis and track
active stream IDs
- Add `resumable-stream@^2.2.12` dependency to package.json
### Frontend (Twenty Front)
- Update `useAgentChat` hook to:
- Use a persistent transport with `prepareReconnectToStreamRequest` for
resumable streams
- Export `resumeStream` function from useChat
- Add `handleStop` callback to clear active stream on DELETE endpoint
- Use thread ID as stable message ID instead of including message count
- Add stream resumption logic in `AgentChatAiSdkStreamEffect` component
to automatically call `resumeStream()` when switching threads
## Database Migration
New migration `1774003611071-add-active-stream-id-to-agent-chat-thread`
adds the `activeStreamId` column to store the current resumable stream
identifier.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
#### Direct Execution `__typename` & null backfill
##### `__typename` filling
The direct execution path now correctly derives `__typename` at every
level of the GraphQL response:
- Connection types: `CompanyConnection`, `CompanyEdge`, `Company`,
`PageInfo`
- GroupBy types: `TaskGroupByConnection` (was incorrectly producing
`TaskConnection`)
- Composite fields: `Links`, `FullName`, `Currency`, etc., handled by a
dedicated formatter (was inheriting the parent object's typename)

Previously, `__typename` was derived from the object's universal
identifier (a UUID), producing broken values like
`20202020B3744779A56180086Cb2E17FConnection`.
##### Null backfill
Selected fields missing from the resolver result are backfilled with
`null`, matching the standard Yoga schema behavior.
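The backfill behavior can be sketched as (function name hypothetical):

```typescript
// Any selected field absent (or undefined) in the resolver result is
// filled with null, matching the standard GraphQL executor behavior.
const backfillSelectedFields = (
  selectedFields: string[],
  result: Record<string, unknown>,
): Record<string, unknown> =>
  Object.fromEntries(
    selectedFields.map((field) => [field, result[field] ?? null]),
  );
```

Fields that were resolved keep their values; fields that were selected but never set come back as explicit `null`, exactly as a client would see from the Yoga path.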
##### Integration test
A new test runs the same `findMany` query (with `__typename` at all
structural levels) through both paths, standard Yoga schema and direct
execution, and asserts identical output via `toStrictEqual`.
# Introduction
Improved the CI assertions to make sure the default-logic function works
fully.
## What's next
About to introduce an S3 + Lambda e2e test to cover the lambda driver
too, in addition to the local driver.
## Summary
- Deletes the entire `1-19` upgrade version command directory (7 files,
~1500 lines) and removes all references from the upgrade module and
command runner.
- Removes `IS_NAVIGATION_MENU_ITEM_ENABLED` and
`IS_NAVIGATION_MENU_ITEM_EDITING_ENABLED` feature flags from the enum,
seed data, generated schema files, and test mocks. These flags had no
remaining feature-gated code — navigation menu items are now always
enabled.
## Summary
- Fix stale `ObjectMetadataItems` GraphQL response cache after field
creation by using `request.workspaceMetadataVersion` (sourced from
Redis) instead of `workspace.metadataVersion` (from the potentially
stale CoreEntityCacheService)
- Make the E2E kanban view test selector more robust with a regex match
## Root Cause
The `useCachedMetadata` GraphQL plugin keys cached responses using
`workspace.metadataVersion` from the `CoreEntityCacheService`. When a
field is created:
1. The migration runner increments `metadataVersion` in DB and Redis
2. But the `CoreEntityCacheService` for `WorkspaceEntity` is **not**
invalidated
3. So `request.workspace.metadataVersion` still has the old version
4. The cache key resolves to the old cached response
5. The frontend gets stale metadata without the newly created field
This breaks E2E tests (and likely affects users): after creating a
custom field, the metadata isn't visible until the workspace entity
cache refreshes.
## Fix
Use `request.workspaceMetadataVersion` (populated from Redis by the
middleware, always up-to-date) as the primary version for cache keys,
falling back to the entity cache version.
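The version preference can be sketched as a one-liner; the field names below are illustrative stand-ins for the request and entity-cache values described above:

```typescript
// Prefer the Redis-backed version populated on the request by the
// middleware (always up-to-date), falling back to the potentially stale
// entity-cache version. Names are illustrative.
type VersionSource = {
  requestMetadataVersion?: number; // from Redis, via middleware
  entityCacheMetadataVersion?: number; // from CoreEntityCacheService
};

const resolveCacheKeyVersion = (source: VersionSource): number | undefined =>
  source.requestMetadataVersion ?? source.entityCacheMetadataVersion;

// The fresh Redis version wins even when the entity cache lags behind:
resolveCacheKeyVersion({
  requestMetadataVersion: 42,
  entityCacheMetadataVersion: 41,
}); // 42
```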
## Test plan
- [ ] E2E `create-kanban-view` tests should pass (creating a Select
field and immediately using it in a Kanban view)
- [ ] Verify `ObjectMetadataItems` returns fresh data after field
creation (no stale cache)
Made with [Cursor](https://cursor.com)
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- Removes `IS_ROW_LEVEL_PERMISSION_PREDICATES_ENABLED` feature flag,
making row-level permission predicates always enabled. Removes
early-return guards from query builders (select, update, insert) and the
shared utility, the public feature flag metadata entry, and
`updateFeatureFlag` calls from integration tests.
- Removes `IS_DATE_TIME_WHOLE_DAY_FILTER_ENABLED` feature flag, making
whole-day datetime filtering always enabled. Simplifies filter input
components and hooks to always use date-only format for `IS` operand on
`DATE_TIME` fields.
- Cleans up enum definitions, seed data, generated schema files, and
test mocks for both flags.
## Summary
- Remove the `IS_APPLICATION_ENABLED` feature flag — application
features are now always enabled
- Remove `@RequireFeatureFlag` decorators and `FeatureFlagGuard` from 7
application resolvers (registration, development, manifest, install,
upgrade, marketplace, oauth)
- Remove frontend feature flag checks: settings navigation visibility,
route protection wrapper, side panel widget type gating, and role
permission flag filtering
- Delete the integration test for disabled-flag behavior
- Clean up seed data, test mocks, and generated schema files
## Summary
- Remove the `IS_DASHBOARD_V2_ENABLED` feature flag from the codebase
- Dashboard V2 features (gauge charts, line charts, pie charts) are now
always enabled
- Remove the validator gate that blocked gauge chart creation/update
without the flag
- Clean up all related code: seed data, dev-seeder service, widget
seeds, and test mocks
## Summary
- Remove 3 completed migration feature flags: `IS_ATTACHMENT_MIGRATED`,
`IS_NOTE_TARGET_MIGRATED`, `IS_TASK_TARGET_MIGRATED` — these were
already enabled by default for all new workspaces via
`DEFAULT_FEATURE_FLAGS`
- Delete all upgrade command directories for versions <= 1.18 (`1-16/`,
`1-17/`, `1-18/`) along with their module registrations, removing ~6,600
lines of dead migration code
- Simplify frontend utility functions
(`getActivityTargetObjectFieldIdName`, `getActivityTargetsFilter`,
`getActivityTargetFieldNameForObject`,
`generateActivityTargetMorphFieldKeys`,
`findActivitiesOperationSignatureFactory`) by removing the
`isMorphRelation` parameter and always using the morph relation path
- Remove feature flag checks from 7 frontend hooks/components that were
gating attachment and activity target behavior behind the removed flags
- Simplify `buildDefaultRelationFlatFieldMetadatasForCustomObject`
server util to always treat attachment, noteTarget, and taskTarget as
morph relations without checking feature flags
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- **Fix settings/usage page crash**: The `GraphWidgetLineChart`
component used on `settings/usage` was crashing with "Instance id is not
provided and cannot be found in context" because it requires
`WidgetComponentInstanceContext` (for tooltip/crosshair component
states) which is only provided inside the widget system. Wraps the
standalone chart usages with the required context provider.
- **Avoid mounting `GraphWidgetLegend` when hidden**: The legend
component calls `useIsPageLayoutInEditMode()` which requires
`PageLayoutEditModeProviderContext` — another context only available
inside the widget system. Since the settings page passes
`showLegend={false}`, the fix conditionally unmounts the legend instead
of always mounting it with a `show` prop. Applied consistently across
all four chart types (line, bar, pie, gauge).
- **Add ClickHouse usage event seeds**: Generates ~400 realistic
`usageEvent` rows spanning the past 35 days with weighted user activity,
weekday/weekend patterns, and gradual ramp-up. Enables developers to see
the usage analytics page with data locally.
## Test plan
- [ ] Navigate to `settings/usage` — page should render without errors
- [ ] Verify the daily usage line chart displays correctly
- [ ] Navigate to a user detail page from the usage list
- [ ] Verify the user detail chart renders without errors
- [ ] Run `npx nx clickhouse:seed twenty-server` and confirm usage
events are seeded
- [ ] Verify chart legend still works correctly on dashboard widgets (no
regression)
Made with [Cursor](https://cursor.com)
## Summary
- Starts deprecation of the `core.dataSource` table by introducing a
dual-write system: `DataSourceService.createDataSourceMetadata` now
writes to both `core.dataSource` and `core.workspace.databaseSchema`
- Migrates read sites (`WorkspaceDataSourceService.checkSchemaExists`,
`WorkspaceSchemaFactory`, `MiddlewareService`,
`WorkspacesMigrationCommandRunner`) to read from
`workspace.databaseSchema` instead of querying the `dataSource` table
- Removes the unused `databaseUrl` field from `WorkspaceEntity` and
drops the column via migration
- Adds a 1.20 upgrade command to backfill `workspace.databaseSchema`
from `dataSource.schema` for existing workspaces
## Summary
- Navigation sidebar was displaying "Notes" instead of "All Notes" for
INDEX views
- `getNavigationMenuItemLabel` had a special case for INDEX views that
returned `objectMetadataItem.labelPlural` instead of the
already-resolved `view.name`
- Since `viewsSelector` already resolves `{objectLabelPlural}` templates
via `resolveViewNamePlaceholders`, the INDEX special case was redundant
and incorrect — removed it from both `getNavigationMenuItemLabel` and
`getViewNavigationMenuItemLabel`
- Added unit tests covering all navigation menu item type branches
<img width="222" height="479" alt="image"
src="https://github.com/user-attachments/assets/de6f92b1-e0c2-445a-a145-cf2820434af7"
/>
## Summary
### Cache invalidation fix
- After migrating object/field permissions to syncable entities (#18609,
#18751, #18567), changes to `flatObjectPermissionMaps`,
`flatFieldPermissionMaps`, or `flatPermissionFlagMaps` no longer
triggered `rolesPermissions` cache invalidation
- This caused stale permission data to be served, leading to flaky
`permissions-on-relations` integration tests and potentially incorrect
permission enforcement in production after object permission upserts
- Adds the three permission-related flat map keys to the condition that
triggers `rolesPermissions` cache recomputation in
`WorkspaceMigrationRunnerService.getLegacyCacheInvalidationPromises`
- Clears memoizer after recomputation to prevent concurrent
`getOrRecompute` calls from caching stale data
### Docker Hub rate limit fix
- CI service containers (postgres, redis, clickhouse) and `docker
run`/`docker build` steps were pulling from Docker Hub
**unauthenticated**, hitting the 100-pull-per-6-hour rate limit on
shared GitHub-hosted runner IPs
- Adds `credentials` blocks to all service container definitions and
`docker/login-action` steps before `docker run`/`docker compose`
commands
- Uses `vars.DOCKERHUB_USERNAME` + `secrets.DOCKERHUB_PASSWORD`
(matching the existing twenty-infra convention)
- Affected workflows: ci-server, ci-merge-queue, ci-breaking-changes,
ci-zapier, ci-sdk, ci-create-app-e2e, ci-website,
ci-test-docker-compose, preview-env-keepalive, spawn-twenty-docker-image
action
## Summary
- Limits workspace creation to 5 workspaces per server when no valid
enterprise key is configured
- Enterprise key validity is checked first (synchronous, in-memory
cached) to avoid unnecessary DB queries on enterprise deployments
- Adds 4 unit tests covering: limit enforcement, enterprise key bypass,
below-limit creation, and performance (no enterprise check when
workspace count is zero)
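The check ordering can be sketched as follows, with the DB count modeled as a plain callback; all names are illustrative:

```typescript
// Enterprise key validity is checked first (synchronous, in-memory cached)
// so enterprise deployments never pay for the workspace count query.
// Illustrative sketch, not the actual service code.
const WORKSPACE_LIMIT = 5;

const canCreateWorkspace = (
  hasValidEnterpriseKey: () => boolean, // cheap, in-memory
  countWorkspaces: () => number, // stands in for the DB query
): boolean => {
  if (hasValidEnterpriseKey()) {
    return true; // bypass: the count query is never issued
  }
  return countWorkspaces() < WORKSPACE_LIMIT;
};
```

A quick way to see the bypass: pass a counting stub as `countWorkspaces` and observe it is never invoked when the enterprise key is valid.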
## Test plan
- [x] All 11 unit tests pass (8 existing + 3 new behavioral + 1
performance assertion)
- [ ] Manual: verify workspace creation blocked at limit=5 without
enterprise key
- [ ] Manual: verify workspace creation allowed beyond limit with valid
enterprise key
- [ ] Manual: verify first workspace (bootstrap) creation is unaffected
Made with [Cursor](https://cursor.com)
## Summary
- Fixes workflow manual trigger command menu items losing their
`availabilityObjectMetadataId` during the 1-20 backfill upgrade command
- The migration runner's `transpileUniversalActionToFlatAction` resolves
`availabilityObjectMetadataId` **from**
`availabilityObjectMetadataUniversalIdentifier`, overwriting the
original value. The backfill was hardcoding the universal identifier to
`null`, causing all `RECORD_SELECTION` items to end up with `NULL`
`availabilityObjectMetadataId` after persistence.
- Now properly resolves `availabilityObjectMetadataUniversalIdentifier`
from `flatObjectMetadataMaps.universalIdentifierById` so the runner can
correctly derive the FK during INSERT.
## Motivations
A lot of self-hosters end up running `yarn database:migrated:prod`,
either manually or through AI-assisted debugging, while trying to
upgrade an instance whose workspace is still blocked on a previous
version, leading to permanent corruption of their entire database.
## What happened
Replaced the direct call to the TypeORM CLI with a command that invokes
it programmatically, adding a layer of safety in case a workspace
appears to be blocked on a version earlier than the one just before the
one being installed (e.g. 1.0 when you try to upgrade from 1.1 to 1.2).
For our cloud we still need a way to bypass this safety check, hence the
`-f` flag.
## Remark
Centralized this logic and refactored it into new services,
`WorkspaceVersionService` and `CoreEngineVersionService`, which will
become useful for the upcoming upgrade refactor.
Related to https://github.com/twentyhq/twenty-infra/pull/529
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
This PR contains Menu, Hero, TrustedBy, Problem, ThreeCards and Footer
sections of the new website.
Most components in there match the Figma designs, except for two things.
- Zoom levels on 3D illustrations from Endless Tools.
- Menu needs to have the same color as Hero - it's not happening at the
moment since Menu is in the layout, not nested inside pages or Hero.
Images are placeholders (same as Figma).
Removed all `@hello-pangea/dnd` imports from the `navigation-menu-item/`
module by replacing the type bridge layer with a native
`NavigationMenuItemDropResult` type. The dnd-kit events were already
powering all DnD — hello-pangea was only used as a type contract and one
dead `<Droppable>` component.
3 files deleted (dead `<Droppable>` wrapper, `DROP_RESULT_OPTIONS` shim,
`toDropResult` bridge), 4 files edited (handler signatures simplified,
bridge removed from orchestrator, utility type narrowed), 1 type
created. Zero behavioral changes; typecheck and tests pass.
Fixes: #18943
## Problem
Two bugs related to the Files field:
1. **File loss on click outside**: When `MultiItemFieldInput` was not in
edit mode (input hidden), clicking outside would still call
`validateInputAndComputeUpdatedItems()` with an empty `inputValue` and
`itemToEditIndex = 0`, causing the first file to be silently deleted.
<img width="624" height="332" alt="image"
src="https://github.com/user-attachments/assets/657aff2c-e497-4685-b8b7-fa22aba772f8"
/>
2. **Unsigned file URLs in timeline activity**: FileId and FileName
values stored in `timelineActivity.properties.diff` (before/after
values) were not being signed (`timelineActivity.properties` is stored
as JSON in the database), making the files inaccessible from the
frontend.
<img width="1092" height="103" alt="image"
src="https://github.com/user-attachments/assets/23d83ea3-4eb9-41ef-a99c-c4515d1cfb35"
/>
## Reproduction
https://github.com/user-attachments/assets/e75b842b-5cbb-46e8-a923-ac9df62deb98
## Changes
- `MultiItemFieldInput.tsx`: Wrap the validate + onChange logic in `if
(isInputDisplayed)` so it only runs when the input is actually open.
- `timeline-activity-query-result-getter.handler.ts`: New handler that
iterates over `properties.diff` fields and signs any file arrays found
in `before`/`after` values, calling the same method
(`fileUrlService.signFileByIdUrl`) as the table field view handler
`FilesFieldQueryResultGetterHandler`.
- `common-result-getters.service.ts`: Register the new handler for
`timelineActivity`.
## After
https://github.com/user-attachments/assets/368c7be1-3101-43a2-ac93-fe1b0d8a7a37
## Summary
- Removes two remaining direct `cookieStorage.setItem('tokenPair', ...)`
calls that were overwriting the Jotai-managed cookie (180-day expiry)
with a session cookie (no expiry) during **token renewal**
- Followup to #18795 which fixed the same issue in `handleSetAuthTokens`
but missed the renewal code paths
## Root cause
Two token renewal paths still had direct cookie writes without
`expires`:
1. **`apollo.factory.ts`** — `attemptTokenRenewal()` fires on every
`UNAUTHENTICATED` GraphQL error after a successful token refresh
2. **`useAgentChat.ts`** — `retryFetchWithRenewedToken()` fires on 401
from the AI chat endpoint
Both called `cookieStorage.setItem('tokenPair', JSON.stringify(tokens))`
without an `expires` attribute, creating a session cookie that overwrote
the Jotai-managed one. This is why the bug was **intermittent after
#18795**: it only appeared after a token renewal, not on fresh login.
The `onTokenPairChange` / `setTokenPair` calls already write through
Jotai's `atomWithStorage` → `createJotaiCookieStorage`, which always
sets `expires: 180 days`.
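The difference between the two write paths comes down to the `expires` attribute on the `Set-Cookie` value: without it, the browser treats the cookie as session-scoped. A minimal sketch of the persistent write, with illustrative names:

```typescript
// Build a persistent tokenPair cookie string. A cookie string without an
// expires/max-age attribute would be a session cookie, which is exactly
// the overwrite bug described above. Illustrative sketch only.
const DAYS_180_MS = 180 * 24 * 60 * 60 * 1000;

const buildTokenPairCookie = (serializedTokens: string, now: Date): string => {
  const expires = new Date(now.getTime() + DAYS_180_MS);
  // `expires` makes the cookie survive browser restarts.
  return `tokenPair=${encodeURIComponent(serializedTokens)}; expires=${expires.toUTCString()}; path=/`;
};
```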
## Test plan
- Log in to the app
- Wait for a token renewal to occur (or force one by letting the access
token expire)
- Inspect the `tokenPair` cookie in DevTools → Application → Cookies
- Verify the cookie retains an expiration date ~180 days from now (not
"Session")
- Close and reopen the browser — confirm you remain logged in
Made with [Cursor](https://cursor.com)
## Bug Description
When reordering stages in the Kanban board, the frontend fires all
viewGroup update mutations concurrently via Promise.all, causing race
conditions in the workspace migration runner's cache invalidation,
database contention, and a thundering herd effect that stalls the
server.
## Changes
Changed `usePerformViewGroupAPIPersist` to execute viewGroup update
mutations sequentially instead of concurrently. The `Promise.all`
pattern fired all N mutations simultaneously, each triggering a full
workspace migration runner pipeline (transaction + cache invalidation).
The sequential `for...of` loop ensures each mutation completes
(including its cache invalidation) before the next begins, eliminating
the race condition.
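The shape of the change can be sketched as follows; the types and callback are illustrative, not the actual hook code:

```typescript
// Replace a concurrent Promise.all fan-out with a sequential for...of
// loop, so each mutation (and its cache invalidation) completes before
// the next one starts. Illustrative sketch of the described change.
type ViewGroupUpdate = { id: string; position: number };

const persistViewGroupsSequentially = async (
  updates: ViewGroupUpdate[],
  updateOneViewGroup: (update: ViewGroupUpdate) => Promise<void>,
): Promise<void> => {
  // Before: await Promise.all(updates.map(updateOneViewGroup))
  // fired N migration-runner pipelines simultaneously.
  for (const update of updates) {
    await updateOneViewGroup(update); // one pipeline at a time
  }
};
```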
## Related Issue
Fixes #18865
## Testing
This fix addresses the root cause identified in the Sonarly analysis on
the issue. The concurrent mutation pattern was causing:
- PostgreSQL row-level lock contention on viewGroup rows
- Cache thundering herd from repeated invalidation/recomputation cycles
- Server stalls requiring container restarts
The sequential approach ensures proper ordering and prevents these race
conditions.
---------
Co-authored-by: Rayan <rayan@example.com>
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
- Gate the `FindManyCommandMenuItems` GraphQL query behind the
`IS_COMMAND_MENU_ITEM_ENABLED` feature flag on the frontend, preventing
an uncaught error when the flag is not enabled for a workspace
- Remove `IS_CONNECTED_ACCOUNT_MIGRATED` from the default feature flags
list
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
- Remove the label identifier field for all standard objects as it's a
first-class citizen that is displayed specifically in the app; it
doesn't make a lot of sense to display it in the Fields widgets
- Disable logic that made the label identifier required and in first
position
- Add all fields for all standard objects in record page layout view
fields
- Do not include position and ts vector fields in custom objects
> [!IMPORTANT]
> The command will create Field widgets for all relations. It is
consistent to the way the frontend dynamically generates them as of
today. We will have to decide which relations we pin as individual Field
widgets before the release. (This will likely land in this command or in
another one.)
In navbar edit mode, selecting an item for edit stored
`selectedNavigationMenuItemIdInEditModeState` (and related
pending-insertion state). Closing the side panel did not reset those
atoms, so the nav item stayed visually selected. Reset both when the
panel closes so the highlight matches the closed panel; reopening in
edit mode then starts from the generic “new item” entry unless the user
picks an item again.
## Summary
- align the forbidden field "Not shared" chip with small chip dimensions
in the object table
- use the shared small border radius token instead of a hardcoded value
- add `overflow: hidden` and `user-select: none` to match chip behavior
more closely
## Testing
- Not run (not requested)
Currently, when trying to create a second step agent, we get an "agent
already exists" error because the agent name is derived from the
workflow id. Use the step id instead.
## Summary
- Adds an `authType` field (`'api_key' | 'access_key' | 'iam_role'`) to
AI provider config
- Providers like Amazon Bedrock that authenticate via IAM role (instance
profile) can now be registered without explicit API keys or access keys
- Backend: `isProviderConfigured()` checks `apiKey || accessKeyId ||
authType`
- Frontend admin panel: shows green "Configured" badge and "IAM role"
description for providers with `authType: "iam_role"`
- Provider detail page shows "IAM role (instance profile)" in the
credentials row
## Companion PR
- twentyhq/twenty-infra#528 — patches `authType: "iam_role"` into
dev/staging Bedrock catalogs
## Changed files
- **Backend**: new `AiProviderAuthType` type, `isProviderConfigured`
util, updated registry + resolver
- **Frontend**: new `AiProviderAuthType` type, updated provider list
card + detail page
## Test plan
- [ ] Deploy with a Bedrock catalog that includes `"authType":
"iam_role"` — verify Bedrock shows "Configured" in admin AI panel
- [ ] Verify OpenAI/Anthropic with `apiKey` still show "Configured"
- [ ] Verify a provider with no credentials and no `authType` still
shows "No credentials"
Made with [Cursor](https://cursor.com)
This PR solves multiple problems:
- An infinite loop that was happening on the board with the initial and
fetch-more queries
- Warning messages triggered when we drag and drop multiple times in a
row
- An attempted fix for an existing Sentry error that couldn't be
reproduced so far
Fixes https://github.com/twentyhq/twenty/issues/18949
## Problem
- Creating a record from filters with DATE_TIME fields produced empty
objects instead of dates — `isPlainObject` from `twenty-shared` treats
`Date` instances as plain objects, so `mergeCompositeValues` spread them
into `{}`
- `buildRecordInputFromFilter` applied composite merge logic to all
field types indiscriminately, including primitives, dates, and strings
- DATE_TIME "is before" filters produced exact boundary values instead
of subtracting a minute
## Fix
- Composite and non-composite fields are now handled in separate
branches — `buildRecordInputFromFilter` uses `isCompositeFieldType` to
decide whether to merge or assign directly
- `mergeCompositeValues` extracted to its own file with a properly typed
signature (`Record<string, unknown>`) — no more runtime type guessing
- DATE_TIME fields with `IS_BEFORE` operand subtract one minute using
`subMinutes` from `date-fns`
- 13 unit tests for `mergeCompositeValues` covering all composite field
types (currency, address, full name, links, emails, phones) and
successive sub-field accumulation
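The root-cause behavior is easy to reproduce: a naive plain-object check accepts `Date` instances, and spreading a `Date` yields `{}` because its data lives on the prototype, not as own enumerable properties. A sketch with illustrative names:

```typescript
// A naive check like this (similar in spirit to the isPlainObject
// behavior described above) treats Date instances as plain objects.
const naiveIsPlainObject = (value: unknown): value is Record<string, unknown> =>
  typeof value === 'object' && value !== null;

const filterDate = new Date('2024-01-01T00:00:00Z');

// The failure mode: spreading a Date produces an empty object,
// so the filter value is silently lost.
const merged = naiveIsPlainObject(filterDate) ? { ...filterDate } : filterDate;
console.log(Object.keys(merged).length); // 0

// The described fix branches on field type instead of value shape;
// a value-level version of that idea would at least exclude Dates:
const isMergeableCompositeValue = (value: unknown): boolean =>
  naiveIsPlainObject(value) && !(value instanceof Date);
```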
<img width="1946" height="792" alt="image"
src="https://github.com/user-attachments/assets/d0abf62b-85d4-4f5f-b6dc-f2cc1c691f7e"
/>
Avoid re-exporting the twenty-ui icons bundle, which is massive (~4MB).
As discussed with @charlesBochet, the problem should eventually be
treated at the twenty-ui level; this is a quick workaround to avoid
overloading the twenty-sdk build size.
When twenty-ui is mature enough to get published, we will work on its
bundle size.
The Kanban view builds a query in two layers:
- Inner query — selects actual records from the table (has all the
permission context)
- Outer query — wraps the inner query's raw SQL string to do
grouping/pagination
The problem: the inner query's SQL is copied out as a plain string
before RLS predicates are added to it. RLS predicates are normally added
lazily when you execute the query, but here the execution happens on the
outer query — which doesn't know about the entity or its RLS rules.
So RLS predicates are never applied anywhere.
The fix: explicitly apply RLS predicates to the inner query before its
SQL is extracted.
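The two-layer problem can be sketched abstractly: once the inner query is flattened to a SQL string, anything normally applied lazily at execution time is lost, so the predicate must be applied before extraction. All names below are illustrative, not the actual query-builder API:

```typescript
// Toy model of the two-layer Kanban query described above.
type QuerySketch = { sql: string; predicates: string[] };

const applyRlsPredicate = (
  query: QuerySketch,
  predicate: string,
): QuerySketch => ({
  ...query,
  predicates: [...query.predicates, predicate],
});

const toSql = (query: QuerySketch): string =>
  query.predicates.length > 0
    ? `${query.sql} WHERE ${query.predicates.join(' AND ')}`
    : query.sql;

// Fix: apply RLS before extracting the inner SQL, so the outer
// grouping/pagination wrapper inherits the predicate.
const inner = applyRlsPredicate(
  { sql: 'SELECT * FROM company', predicates: [] },
  'workspace_member_id = $1',
);
const outer = `SELECT * FROM (${toSql(inner)}) grouped`;
```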
Additionally, fixed a temporal issue in Datetime pickers.
## Summary
### Externalize `twenty-client-sdk` from `twenty-sdk`
Previously, `twenty-client-sdk` was listed as a `devDependency` of
`twenty-sdk`, which caused Vite to bundle it inline into the dist
output. This meant end-user apps had two copies of `twenty-client-sdk`:
one hidden inside `twenty-sdk`'s bundle, and one installed explicitly in
their `node_modules`. These copies could drift apart since they weren't
guaranteed to be the same version.
**Change:** Moved `twenty-client-sdk` from `devDependencies` to
`dependencies` in `twenty-sdk/package.json`. Vite's `external` function
now recognizes it and keeps it as an external `require`/`import` in the
dist output. End users get a single deduplicated copy resolved by their
package manager.
### Externalize `twenty-sdk` from `create-twenty-app`
Similarly, `create-twenty-app` had `twenty-sdk` as a `devDependency`
(bundled inline). After refactoring `create-twenty-app` to
programmatically import operations from `twenty-sdk` (instead of
shelling out via `execSync`), it became a proper runtime dependency.
**Change:** Moved `twenty-sdk` from `devDependencies` to `dependencies`
in `create-twenty-app/package.json`.
### Switch E2E CI to `yarn npm publish`
The `workspace:*` protocol in `dependencies` is a Yarn-specific feature.
`npm publish` publishes it as-is (which breaks for consumers), while
`yarn npm publish` automatically replaces `workspace:*` with the
resolved version at publish time (e.g., `workspace:*` becomes `=1.2.3`).
**Change:** Replaced `npm publish` with `yarn npm publish` in
`.github/workflows/ci-create-app-e2e.yaml`.
### Replace `execSync` with programmatic SDK calls in
`create-twenty-app`
`create-twenty-app` was shelling out to `yarn twenty remote add` and
`yarn twenty server start` via `execSync`, which assumed the `twenty`
binary was already installed in the scaffolded app. This was fragile and
created an implicit circular dependency.
**Changes:**
- Replaced `execSync('yarn twenty remote add ...')` with a direct call
to `authLoginOAuth()` from `twenty-sdk/cli`
- Replaced `execSync('yarn twenty server start')` with a direct call to
`serverStart()` from `twenty-sdk/cli`
- Deleted the duplicated `setup-local-instance.ts` from
`create-twenty-app`
### Centralize `serverStart` as a dedicated operation
The Docker server start logic was previously inline in the `server
start` CLI command handler (`server.ts`), and `setup-local-instance.ts`
was shelling out to `yarn twenty server start` to invoke it -- meaning
`twenty-sdk` was calling itself via a child process.
**Changes:**
- Extracted the Docker container management logic into a new
`serverStart` operation (`cli/operations/server-start.ts`)
- Merged the detect-or-start flow from `setup-local-instance.ts` into
`serverStart` (detect across multiple ports, start Docker if needed,
poll for health)
- Deleted `setup-local-instance.ts` from `twenty-sdk`
- Added `onProgress` callback (consistent with other operations like
`appBuild`) instead of direct `console.log` calls
- Both the `server start` CLI command and `create-twenty-app` now call
`serverStart()` programmatically
related to https://github.com/twentyhq/twenty-infra/pull/525
## Summary
- Fixes intermittent `INVALID_QUERY_INPUT` errors (152 occurrences / 2
users in Sentry) caused by the single record picker sending `{ id: { in:
[] } }` to the Search query when no records are selected
- The `skip` guard is logically correct but Apollo Client v4 can briefly
fire queries during React 18 render transitions before processing the
skip flag
- Makes `selectedIdsFilter` conditional on `hasSelectedIds`, so
variables contain a safe empty filter `{}` regardless of skip behavior —
matching the existing defensive pattern used by the third query's
`notFilter`
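The defensive pattern amounts to never letting an empty selection reach the query variables; a minimal sketch with an illustrative helper name:

```typescript
// Build the selected-ids filter conditionally: an empty selection yields
// a safe empty filter {} instead of { id: { in: [] } }, so the query is
// valid even if it fires before `skip` is processed. Illustrative sketch.
const buildSelectedIdsFilter = (selectedIds: string[]) =>
  selectedIds.length > 0 ? { id: { in: selectedIds } } : {};

console.log(buildSelectedIdsFilter([])); // {}
```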
## Test plan
- [ ] Open a relation field picker (e.g., Person on an Opportunity) with
no existing value — search should load without errors
- [ ] Open a relation field picker with an existing value — selected
record should appear and search should work
- [ ] Clear a selected relation and reopen the picker — no console
errors
Made with [Cursor](https://cursor.com)
## Summary
- Adds `isSearchable: true` to the workflow standard object definition
so new workspaces get searchable workflows automatically
- Adds a `upgrade:1-20:make-workflow-searchable` migration command that
flips the `isSearchable` flag on the `objectMetadata` row for existing
workspaces, with proper cache invalidation and metadata version
increment
- Registers the command in the 1-20 upgrade module and the
`upgrade.command.ts` orchestrator
The `searchVector` stored generated column already exists on the
workflow table, so no data backfill is needed — this is purely a
metadata flag change that makes the search service include workflows in
results.
## Test plan
- [x] `--dry-run` logs what it would do without making changes
- [x] Actual run updates both workspaces and invalidates caches
- [x] Idempotent: re-running skips already-searchable workspaces
- [x] Typecheck passes
- [x] Lint passes on changed files
Made with [Cursor](https://cursor.com)
Fixes https://github.com/twentyhq/twenty/issues/18148
## Problem
- Scrolling down in the "no value" kanban column never loads additional
records when it's the only column that needs pagination
- `useTriggerRecordBoardFetchMore` — `.filter(isDefined)` removes `null`
from the list of group values to fetch, and `null` is the value used by
"no value" columns, causing early exit
- `useTriggerRecordBoardFetchMore` — the inline `{ in: [...values] }`
filter cannot express a NULL match, so even without the early exit the
query would be wrong
## Fix
- "No value" column now triggers pagination correctly —
`useTriggerRecordBoardFetchMore` (removed `.filter(isDefined)` so `null`
values pass through)
- Query filter correctly matches NULL field values —
`useTriggerRecordBoardFetchMore` (replaced inline `{ in: [...] }` with
`computeRecordGroupOptionsFilter` which generates `{ is: 'NULL' }` for
null values and `{ in: [...] }` for non-null values)
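The filter-shape split can be sketched as follows; this mirrors the described behavior of `computeRecordGroupOptionsFilter`, not its actual signature:

```typescript
// Null group values ("no value" column) need { is: 'NULL' }, which an
// { in: [...] } filter cannot express. Illustrative sketch only.
type GroupFilter = { is: 'NULL' } | { in: string[] };

const computeGroupFilters = (values: (string | null)[]): GroupFilter[] => {
  const nonNullValues = values.filter((v): v is string => v !== null);
  const filters: GroupFilter[] = [];
  if (values.includes(null)) {
    filters.push({ is: 'NULL' }); // matches records with no group value
  }
  if (nonNullValues.length > 0) {
    filters.push({ in: nonNullValues }); // matches concrete group values
  }
  return filters;
};
```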
## Summary
- Chrome 142 rejects `var()` references in SVG `width`/`height`
attributes, causing icons to fall back to `100%` size
- Replaced `themeCssVariables.icon.size.*` /
`themeCssVariables.icon.stroke.*` (raw `var()` strings) with resolved
`theme.icon.size.*` / `theme.icon.stroke.*` (numeric values from
`ThemeContext`) when passed as icon component props
- Affects 3 navigation menu item components: `NavigationMenuItemFolder`,
`NavigationMenuItemLinkDisplay`, `LinkIconWithLinkOverlay`
## Test plan
- [ ] Verify navigation drawer folder chevron icons render at correct
size
- [ ] Verify navigation drawer link arrow icons render at correct size
- [ ] Verify link overlay icons render at correct size
- [ ] Test in Chrome 142+ to confirm the fix
- [ ] Test in Firefox/Safari to confirm no regression
When an If/Else branch skips an Iterator step (because the branch wasn't
taken), the executor incorrectly entered the iterator's loop body. This
happened because `getNextStepIdsToExecute` checked `!hasProcessedAllItems`
on an undefined result, which evaluated to true, causing it to return
`initialLoopStepIds` instead of the post-loop `nextStepIds`.
Fix: Add a `!executedStepOutput.shouldSkipStepExecution` guard to the
iterator condition in `getNextStepIdsToExecute`, consistent with the
existing `shouldFailSafely` guard.
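The pitfall and the guard can be condensed into a small sketch; the types model a skipped step's output and are illustrative, not the actual executor code:

```typescript
// When a step is skipped, hasProcessedAllItems is absent from its output,
// so a bare !hasProcessedAllItems check evaluates !undefined === true and
// wrongly re-enters the loop body. Illustrative sketch of the described fix.
type StepOutput = {
  hasProcessedAllItems?: boolean;
  shouldSkipStepExecution?: boolean;
};

// Buggy condition: true for a skipped step's output.
const buggyShouldLoop = (output: StepOutput): boolean =>
  !output.hasProcessedAllItems;

// Guarded condition: a skipped step never enters the loop body.
const guardedShouldLoop = (output: StepOutput): boolean =>
  !output.shouldSkipStepExecution && !output.hasProcessedAllItems;

buggyShouldLoop({ shouldSkipStepExecution: true });   // true: enters loop
guardedShouldLoop({ shouldSkipStepExecution: true }); // false: exits loop
```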
Clear the SSE client before swapping auth tokens during impersonation,
preventing a userWorkspaceId mismatch between the existing event stream
(created under the admin's identity) and the new impersonation token.
Treat NOT_AUTHORIZED event stream errors as recoverable (destroy +
recreate), matching the existing behavior for
EVENT_STREAM_DOES_NOT_EXIST and EVENT_STREAM_ALREADY_EXISTS.
Introduce a new feature flag for the record table widget, enabling
conditional rendering and state management based on its status. Update
related components and tests accordingly.
---------
Co-authored-by: Charles Bochet <charlesBochet@users.noreply.github.com>
- Fix an issue with the name field, which must come first in the view
field list
- Improve the dry run and logs of the backfill page layouts command
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: prastoin <paul@twenty.com>
## Summary
Fixes the merge-queue E2E failures that started after #18912 landed.
`filterReadableActiveObjectMetadataItems` (introduced in #18912) treats
a **missing** entry in the permissions map as "deny read". The previous
helper (`getObjectPermissionsFromMapByObjectMetadataId`) treated it as
**"allow read"** via a `?? { canReadObjectRecords: true, … }` fallback.
Because the permissions map is empty on initial render (before the API
round-trip completes), **every object was temporarily filtered out**.
This made `useDefaultHomePagePath` return the settings-profile path,
triggering a redirect race that broke Settings navigation in three E2E
tests:
- `signup_invite_email.spec.ts` — timed out waiting for
`getByRole('link', { name: 'Members' })`
- `create-kanban-view.spec.ts` — timed out waiting for
`getByRole('link', { name: 'Data model' })`
- `workflow-creation.spec.ts` — timed out waiting for sidebar Workflows
button
**Evidence from CI:**
- Last passing merge-queue run: base `13ea3ec75c` (before #18912)
- First failing merge-queue run: base `5f0d6553f6` (#18912)
- All individual PR CI runs continued to pass (they don't rebase on
latest main)
- Screenshot artifact shows the app stuck on the main page — the
Settings drawer never opened
## Fix
When an object has no entry in the permissions map, default to allowing
read (matching the old behavior). An empty map means permissions haven't
loaded yet, not that the user lacks access.
```diff
 objectMetadataItems.filter((objectMetadataItem) => {
+  if (!objectMetadataItem.isActive) {
+    return false;
+  }
+
   const objectPermissions =
     objectPermissionsByObjectMetadataId[objectMetadataItem.id];
-
-  return (
-    isDefined(objectPermissions) &&
-    objectPermissions.canReadObjectRecords &&
-    objectMetadataItem.isActive
-  );
+
+  if (!isDefined(objectPermissions)) {
+    return true;
+  }
+
+  return objectPermissions.canReadObjectRecords;
 });
```
## Summary
- When `STORAGE_S3_PRESIGNED_URL_BASE` is configured, the file
controller returns a **302 redirect** to a presigned S3 URL instead of
proxying every byte through the server. This eliminates server bandwidth
and CPU overhead for S3-backed deployments.
- For local storage or S3 without a public endpoint, behavior is
unchanged (stream + pipe with security headers).
- Added `getPresignedUrl` to the `StorageDriver` interface (required
method returning `string | null`), with implementations in S3Driver
(uses a separate presign client with the public endpoint), LocalDriver
(returns `null`), and ValidatedStorageDriver (path traversal protection
+ delegation).
- Added a unified `getFileResponseById` method in `FileService` that
performs a single DB lookup and returns either a redirect URL or a
stream, avoiding double lookups.
- Extracted `getContentDisposition` from the header util so both the
proxy path and presigned URL path share the same inline/attachment
allowlist.
- Added MinIO service to `docker-compose.dev.yml` (optional `s3`
profile) for local S3 testing.
- Documented S3 presigned URL setup, CORS, and `nosniff` requirements in
the self-hosting docs.
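The redirect-or-stream decision described in the summary can be sketched as below. The shapes are hypothetical (the real `StorageDriver` and `FileService` have more members, and `getPresignedUrl` is presumably async); this only shows the branch, sync for brevity:

```typescript
// Hypothetical driver shape: getPresignedUrl returns a URL when
// STORAGE_S3_PRESIGNED_URL_BASE is configured, null for local storage.
interface StorageDriverSketch {
  getPresignedUrl(path: string): string | null;
}

type FileResponse =
  | { kind: 'redirect'; url: string } // served as a 302 Location header
  | { kind: 'stream'; path: string }; // proxied with security headers

// Single decision point: redirect to presigned URL, or fall back to streaming.
const getFileResponse = (
  driver: StorageDriverSketch,
  path: string,
): FileResponse => {
  const presignedUrl = driver.getPresignedUrl(path);
  return presignedUrl !== null
    ? { kind: 'redirect', url: presignedUrl }
    : { kind: 'stream', path };
};
```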
## Test plan
- [x] All 63 unit tests pass across 5 test suites (util, S3 driver,
validated driver, file storage service, controller)
- [x] `npx nx typecheck twenty-server` passes
- [ ] Manual E2E test with MinIO: `docker compose --profile s3 up -d`,
configure S3 env vars, verify `curl -I` returns 302 with `Location`
header pointing to MinIO
- [ ] Verify local storage (no `STORAGE_S3_PRESIGNED_URL_BASE`) still
streams files with 200 + security headers
- [ ] Verify public assets endpoint still proxies (no redirect)
Made with [Cursor](https://cursor.com)
## Summary
- Adds an object type filter dropdown to the side panel, allowing users
to scope record search results to a specific object type (e.g. People,
Companies, Opportunities)
- The filter appears as a funnel icon next to the search input in both
the global search (SearchRecords page) and the "Pick a record" sidebar
flow
- Replaces the AI sparkles button on the search records page with the
filter icon
- Uses colored icons matching the navigation menu style
(NavigationMenuItemStyleIcon + getStandardObjectIconColor)
## Test plan
- [ ] Open the side panel, type something to enter SearchRecords mode,
verify the filter icon appears
- [ ] Click the filter icon, verify the dropdown opens with "Object"
header, search input, "All Objects" and individual object types with
colored icons
- [ ] Select an object type, verify only records of that type appear in
search results
- [ ] Select "All Objects", verify all record types appear again
- [ ] Verify the filter icon turns blue when a filter is active
- [ ] In layout customization mode, add a new sidebar item > Record,
verify the filter dropdown also works there
- [ ] Close and reopen the side panel, verify the filter resets to "All
Objects"
- [ ] Verify the AI sparkles button no longer appears on the command
menu root page
- [ ] Verify the AI edit icon still appears on Ask AI pages
Made with [Cursor](https://cursor.com)
## Summary
- The 1-20 backfill command menu items upgrade command was mixing
standard and custom application flat entities into a single
`validateBuildAndRunWorkspaceMigration` call. Since each migration run
is tied to a single application, the backfill is now split into two
separate runs: standard items under `twentyStandardFlatApplication` and
workflow trigger items under `workspaceCustomFlatApplication`.
## Summary
- Standard object and field metadata labels were plain strings instead
of being wrapped in Lingui `msg` template literals, which prevented
extraction into translation catalogs. This caused "Uncompiled message
detected!" warnings at runtime.
- The bug was introduced when the decorator-based approach
(``@WorkspaceEntity({ labelSingular: msg`...` })``) was replaced by flat
metadata builder utils with plain strings.
- Adds an `i18nLabel` helper to safely extract the message string from
`MessageDescriptor` objects.
- Re-runs `lingui extract` and `lingui compile` to update all locale PO
files and compiled catalogs.
- Adds an integration test that queries the metadata API with `x-locale`
headers to verify locale-aware label resolution.
- Includes a ClickHouse usage event writer integration test (small
unrelated fix noticed along the way).
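A minimal sketch of an `i18nLabel`-style helper is below. The `MessageDescriptor` shape follows Lingui's public type (`id` plus an optional `message`); the actual helper in the PR may handle more cases:

```typescript
// Sketch only: extract a display string from a Lingui MessageDescriptor,
// while tolerating plain strings from legacy metadata.
type MessageDescriptor = { id: string; message?: string };

const i18nLabel = (label: MessageDescriptor | string): string => {
  if (typeof label === 'string') {
    return label;
  }
  // Lingui's msg`...` sets message to the template source; fall back to id.
  return label.message ?? label.id;
};
```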
## Test plan
- [ ] CI lint passes (prettier + oxlint)
- [ ] CI typecheck passes
- [ ] Server unit tests pass
- [ ] Integration tests pass (including new `object-metadata-i18n` test)
- [ ] No "Uncompiled message detected!" warnings for standard
object/field labels at runtime
Made with [Cursor](https://cursor.com)
see [discord
discussion](https://discord.com/channels/1130383047699738754/1486299347091198054/1486299351520383116)
## Summary
When an OAuth application token carries both `applicationId` and
`userId`/`userWorkspaceId`, the auth context now uses the **user's
role** for permissions instead of the application's `defaultRoleId`.
This fixes the case where external clients authenticating via OAuth
(e.g. a client's external AI chat) were getting the app's permissions
instead of the authenticated user's.
### What changed
- **`jwt.auth.strategy.ts`** (`validateApplicationToken`): when an
application token includes user info, also resolve `workspaceMemberId`
and `workspaceMember` from the workspace cache (same pattern as
`validateAccessToken`)
- **`workspace-auth-context.middleware.ts`** (`buildAuthContext`): when
both `application` and `user` (with
`workspaceMemberId`/`workspaceMember`) are present on the request, build
a `UserWorkspaceAuthContext` instead of
`ApplicationWorkspaceAuthContext`. Falls back to application context if
workspace member cannot be resolved.
### Places impacted by this change (no code changes, behavior changes)
These places check `isApplicationAuthContext` or resolve roles from auth
context. Since hybrid tokens (OAuth with user) now produce a
`UserWorkspaceAuthContext`, they naturally flow into the
`isUserAuthContext` branches:
| File | Impact |
|------|--------|
| `permissions.service.ts` — `resolveRolePermissionConfigFromAuthContext` | OAuth+user now uses the user's role via the `isUserAuthContext` branch instead of `application.defaultRoleId` |
| `common-api-context-builder.service.ts` — `getObjectsPermissions` | Same — OAuth+user falls into the `isUserAuthContext` branch |
| `common-base-query-runner.service.ts` — `getRoleIdOrThrow` | Same — OAuth+user falls into the `isUserAuthContext` branch |
| `actor-from-auth-context.service.ts` — `buildActorMetadata` | Records created via OAuth+user will show the **user's name** as actor instead of the application's name |
| `message-find-one.post-query.hook.ts` | OAuth+user now passes the `isUserAuthContext` check (previously would fail unless it was the Twenty standard application) |
| `front-component.resolver.ts` | Front component tokens include `userId` — they will now correctly use the user's role, fixing a pre-existing permission escalation where a user could access data through a front component's app role that exceeded their own |
| `logic-function-executor.service.ts` | **Not impacted** — only generates tokens with `applicationId` (no `userId`) |
Issue: https://www.loom.com/share/dd48cd509f614e51829f6a5b58d41b6b
Bug: Unsetting a revoked object permission keeps it revoked.
When a role has a global permission enabled (e.g.
`canReadAllObjectRecords: true`) but an object-level override revokes it
(`canReadObjectRecords: false`), clicking to remove that override had no
effect — the permission stayed revoked after save.
Root cause:
Backend (`object-permission.service.ts`): The nullish coalescing operator
(`??`) was used to fall back to the current DB value when the input didn't
provide a value. Since `??` treats both `null` and `undefined` as nullish,
sending `canReadObjectRecords: null` (meaning "remove override") was
coalesced to the current value (`false`), silently discarding the reset.
Fix:
- Backend: Replaced `??` with explicit `!== undefined` checks, so `null` is
preserved as a meaningful value (meaning "no override / inherit from
global") while `undefined` (field not provided) still falls back to the
current value. This also fixes the "Reset all permissions" flow, which
sends `null` for all permission fields.
- Frontend: Changed `!value` to `value === false` so that only an explicit
`false` cascades revocation to write permissions. Setting `null` (reset to
inherit) now only affects the read permission itself.
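The `??` vs `!== undefined` distinction at the heart of this fix can be illustrated in isolation. The helper names below are purely illustrative, not the actual service code:

```typescript
// null = "no override, inherit from global"; undefined = "field not provided"
type Tristate = boolean | null;

// Buggy variant: null is nullish, so ?? coalesces it to the current DB value,
// silently discarding a "remove override" request.
const mergeWithCoalesce = (
  input: Tristate | undefined,
  current: Tristate,
): Tristate => input ?? current;

// Fixed variant: only a truly absent field falls back to the current value;
// an explicit null is preserved as a meaningful reset.
const mergeWithUndefinedCheck = (
  input: Tristate | undefined,
  current: Tristate,
): Tristate => (input !== undefined ? input : current);
```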
## Summary
- Migrates twenty-companion from standalone npm to the repo yarn
workspaces
- Removes package-lock.json (resolves Oneleet security finding about npm
lifecycle scripts)
- Converts npm overrides to yarn resolutions
- Updates scripts from npm run to yarn
## Test plan
- [x] Verified yarn install succeeds at root
- [x] Verified yarn start in twenty-companion launches the Electron app
- [ ] Verify Oneleet finding is resolved after merge
Logo upload during onboarding failed because it required the Twenty
Standard Application, which doesn't exist yet at that point.
We only need the custom app's `universalIdentifier`
(`workspace.workspaceCustomApplicationId`, available from sign-up), so we
can safely remove the `ApplicationService` dependency.
https://github.com/user-attachments/assets/a18599ee-0b91-4629-ad77-2f708351449a
Closes #18829
Co-authored-by: Charles Bochet <charles@twenty.com>
## Summary
- Consolidates duplicated auth-context-to-role-ID resolution logic
(previously in
`PermissionsService.resolveRolePermissionConfigFromAuthContext` and
`CommonBaseQueryRunnerService.getRoleIdOrThrow`) into a single pure
utility function `resolveRolePermissionConfig` in the ORM layer
- The utility is synchronous and operates on cached data
(`userWorkspaceRoleMap`, `apiKeyRoleMap`) already loaded into the
workspace context — no async calls, no service dependencies
- Adds `apiKeyRoleMap` to `ORMWorkspaceContext` (it was already in the
workspace cache, just not loaded into the ORM context)
- Removes `PermissionsService` dependency from
`NavigationMenuItemRecordIdentifierService`
- Removes `UserRoleService` and `ApiKeyRoleService` injections from
`CommonBaseQueryRunnerService`
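A hypothetical sketch of such a pure, synchronous resolver over the cached maps is below; the real `resolveRolePermissionConfig` signature and context shape will differ:

```typescript
// Sketch of the cached data already loaded into the ORM workspace context.
type WorkspaceContextSketch = {
  userWorkspaceRoleMap: Record<string, string>; // userWorkspaceId -> roleId
  apiKeyRoleMap: Record<string, string>; // apiKeyId -> roleId
};

// Pure and synchronous: no async calls, no service dependencies.
const resolveRoleId = (
  context: WorkspaceContextSketch,
  auth: { userWorkspaceId?: string; apiKeyId?: string },
): string => {
  if (auth.userWorkspaceId !== undefined) {
    const roleId = context.userWorkspaceRoleMap[auth.userWorkspaceId];
    if (roleId !== undefined) return roleId;
  }
  if (auth.apiKeyId !== undefined) {
    const roleId = context.apiKeyRoleMap[auth.apiKeyId];
    if (roleId !== undefined) return roleId;
  }
  // Mirrors the getRoleIdOrThrow behavior when nothing matches.
  throw new Error('No role found for auth context');
};
```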
## Test plan
- [ ] Existing typecheck passes (`npx nx typecheck twenty-server`)
- [ ] Verify record identifier resolution still works for navigation
menu items (user, system, API key, and application auth contexts)
- [ ] Verify GraphQL CRUD queries still enforce correct role-based
permissions
- [ ] Verify API key authenticated requests resolve permissions
correctly
Made with [Cursor](https://cursor.com)
## 1. The `twenty-client-sdk` Package (Source of Truth)
The monorepo package at `packages/twenty-client-sdk` ships with:
- A **pre-built metadata client** (static, generated from a fixed
schema)
- A **stub core client** that throws at runtime (`CoreApiClient was not
generated...`)
- Both ESM (`.mjs`) and CJS (`.cjs`) bundles in `dist/`
- A `package.json` with proper `exports` map for
`twenty-client-sdk/core`, `twenty-client-sdk/metadata`, and
`twenty-client-sdk/generate`
## 2. Generation & Upload (Server-Side, at Migration Time)
**When**: `WorkspaceMigrationRunnerService.run()` executes after a
metadata schema change.
**What happens in `SdkClientGenerationService.generateAndStore()`**:
1. Copies the stub `twenty-client-sdk` package from the server's assets
(resolved via `SDK_CLIENT_PACKAGE_DIRNAME` — from
`dist/assets/twenty-client-sdk/` in production, or from `node_modules`
in dev)
2. Filters out `node_modules/` and `src/` during copy — only
`package.json` + `dist/` are kept (like an npm publish)
3. Calls `replaceCoreClient()` which uses `@genql/cli` to introspect the
**application-scoped** GraphQL schema and generates a real
`CoreApiClient`, then compiles it to ESM+CJS and overwrites
`dist/core.mjs` and `dist/core.cjs`
4. Archives the **entire package** (with `package.json` + `dist/`) into
`twenty-client-sdk.zip`
5. Uploads the single archive to S3 under
`FileFolder.GeneratedSdkClient`
6. Sets `isSdkLayerStale = true` on the `ApplicationEntity` in the
database
## 3. Invalidation Signal
The `isSdkLayerStale` boolean column on `ApplicationEntity` is the
invalidation mechanism:
- **Set to `true`** by `generateAndStore()` after uploading a new client
archive
- **Checked** by both logic function drivers before execution — if
`true`, they rebuild their local layer
- **Set back to `false`** by `markSdkLayerFresh()` after the driver has
successfully consumed the new archive
Default is `false` so existing applications without a generated client
aren't affected.
## 4a. Logic Functions — Local Driver
**`ensureSdkLayer()`** is called before every execution:
1. Checks if the local SDK layer directory exists AND `isSdkLayerStale`
is `false` → early return
2. Otherwise, cleans the local layer directory
3. Calls `downloadAndExtractToPackage()` which streams the zip from S3
directly to disk and extracts the full package into
`<tmpdir>/sdk/<workspaceId>-<appId>/node_modules/twenty-client-sdk/`
4. Calls `markSdkLayerFresh()` to set `isSdkLayerStale = false`
**At execution time**, `assembleNodeModules()` symlinks everything from
the deps layer's `node_modules/` **except** `twenty-client-sdk`, which
is symlinked from the SDK layer instead. This ensures the logic
function's `import ... from 'twenty-client-sdk/core'` resolves to the
generated client.
## 4b. Logic Functions — Lambda Driver
**`ensureSdkLayer()`** is called during `build()`:
1. Checks if `isSdkLayerStale` is `false` and an existing Lambda layer
ARN exists → early return
2. Otherwise, deletes all existing layer versions for this SDK layer
name
3. Calls `downloadArchiveBuffer()` to get the raw zip from S3 (no disk
extraction)
4. Calls `reprefixZipEntries()` which streams the zip entries into a
**new zip** with the path prefix
`nodejs/node_modules/twenty-client-sdk/` — this is the Lambda layer
convention path. All done in memory, no disk round-trip
5. Publishes the re-prefixed zip as a new Lambda layer via
`publishLayer()`
6. Calls `markSdkLayerFresh()`
**At function creation**, the Lambda is created with **two layers**:
`[depsLayerArn, sdkLayerArn]`. The SDK layer is listed last so it
overwrites the stub `twenty-client-sdk` from the deps layer (later
layers take precedence in Lambda's `/opt` merge).
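The re-prefixing in step 4 amounts to prepending the Lambda layer convention path to every zip entry. A minimal sketch (function name hypothetical, paths from the description):

```typescript
// Lambda layers mount under /opt; Node.js resolves packages from
// /opt/nodejs/node_modules, hence this entry prefix.
const LAMBDA_LAYER_PREFIX = 'nodejs/node_modules/twenty-client-sdk/';

// Re-prefix each entry path of the SDK archive for the new layer zip.
const reprefixZipEntryPaths = (entryPaths: string[]): string[] =>
  entryPaths.map((entryPath) => `${LAMBDA_LAYER_PREFIX}${entryPath}`);
```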
## 5. Front Components
Front components are built by `app:build` with `twenty-client-sdk/core`
and `twenty-client-sdk/metadata` as **esbuild externals**. The stored
`.mjs` in S3 has unresolved bare import specifiers like `import {
CoreApiClient } from 'twenty-client-sdk/core'`.
SDK import resolution is split between the **frontend host** (fetching &
caching SDK modules) and the **Web Worker** (rewriting imports):
**Server endpoints**:
- `GET /rest/front-components/:id` —
`FrontComponentService.getBuiltComponentStream()` returns the **raw
`.mjs`** directly from file storage. No bundling, no SDK injection.
- `GET /rest/sdk-client/:applicationId/:moduleName` —
`SdkClientController` reads a single file (e.g. `dist/core.mjs`) from
the generated SDK archive via
`SdkClientGenerationService.readFileFromArchive()` and serves it as
JavaScript.
**Frontend host** (`FrontComponentRenderer` in `twenty-front`):
1. Queries `FindOneFrontComponent` which returns `applicationId`,
`builtComponentChecksum`, `usesSdkClient`, and `applicationTokenPair`
2. If `usesSdkClient` is `true`, renders
`FrontComponentRendererWithSdkClient` which calls the
`useApplicationSdkClient` hook
3. `useApplicationSdkClient({ applicationId, accessToken })` checks the
Jotai atom family cache for existing blob URLs. On cache miss, fetches
both SDK modules from `GET /rest/sdk-client/:applicationId/core` and
`/metadata`, creates **blob URLs** for each, and stores them in the atom
family
4. Once the blob URLs are cached, passes them as `sdkClientUrls`
(already blob URLs, not server URLs) to `SharedFrontComponentRenderer` →
`FrontComponentWorkerEffect` → worker's `render()` call via
`HostToWorkerRenderContext`
**Worker** (`remote-worker.ts` in `twenty-sdk`):
1. Fetches the raw component `.mjs` source as text
2. If `sdkClientUrls` are provided and the source contains SDK import
specifiers (`twenty-client-sdk/core`, `twenty-client-sdk/metadata`),
**rewrites** the bare specifiers to the blob URLs received from the host
(e.g. `'twenty-client-sdk/core'` → `'blob:...'`)
3. Creates a blob URL for the rewritten source and `import()`s it
4. Revokes only the component blob URL after the module is loaded — the
SDK blob URLs are owned and managed by the host's Jotai cache
This approach eliminates server-side esbuild bundling on every request,
caches SDK modules per application in the frontend, and keeps the
worker's job to a simple string rewrite.
## Summary Diagram
```
app:build (SDK)
└─ twenty-client-sdk stub (metadata=real, core=stub)
│
▼
WorkspaceMigrationRunnerService.run()
└─ SdkClientGenerationService.generateAndStore()
├─ Copy stub package (package.json + dist/)
├─ replaceCoreClient() → regenerate core.mjs/core.cjs
├─ Zip entire package → upload to S3
└─ Set isSdkLayerStale = true
│
┌────────┴────────────────────┐
▼ ▼
Logic Functions Front Components
│ │
├─ Local Driver ├─ GET /rest/sdk-client/:appId/core
│ └─ downloadAndExtract │ → core.mjs from archive
│ → symlink into │
│ node_modules ├─ Host (useApplicationSdkClient)
│ │ ├─ Fetch SDK modules
└─ Lambda Driver │ ├─ Create blob URLs
└─ downloadArchiveBuffer │ └─ Cache in Jotai atom family
→ reprefixZipEntries │
→ publish as Lambda ├─ GET /rest/front-components/:id
layer │ → raw .mjs (no bundling)
│
└─ Worker (browser)
├─ Fetch component .mjs
├─ Rewrite imports → blob URLs
└─ import() rewritten source
```
## Next PR
- Estimate the perf improvement from adding Redis caching for front
component client storage (front components aren't cached at all initially)
- Implement a frontend blob-invalidation SSE event from the server
---------
Co-authored-by: Charles Bochet <charlesBochet@users.noreply.github.com>
## Demo
https://github.com/user-attachments/assets/584de452-544a-41f8-ae9f-4be9e9d0cd9f
## Problem
- Dashboards only supported chart widgets — tabular record data had no
inline widget type
- `RecordTable` was tightly coupled to the record index: HTML IDs, CSS
variables, and hover portals were global strings with no per-instance
scoping, so multiple tables on the same page would collide
- `updateRecordTableCSSVariable`, `RECORD_TABLE_HTML_ID`, and cell
portal IDs were hardcoded — placing two tables caused hover portals and
CSS column widths to bleed across instances
- Grid drag-select captured record UUIDs as cell IDs, producing `NaN`
layout coordinates and a full-page freeze on second widget creation
## Fix
- `RECORD_TABLE` is now a valid widget type across the full stack —
server DTOs, DB enum migration, universal config mapping, GraphQL
codegen, shared types (`RecordTableConfigurationDto`, `WidgetType`,
`addRecordTableWidgetType` migration)
- A record table widget can be placed on a dashboard and boots from a
View ID with no record index dependency —
`StandaloneRecordTableProvider` + `StandaloneRecordTableViewLoadEffect`
(wraps existing `RecordTableWithWrappers` unchanged)
- Selecting a data source auto-creates a dedicated View with up to 6
initial fields; switching source or deleting the widget cleans up the
View — `useCreateViewForRecordTableWidget` +
`useDeleteViewForRecordTableWidget`
- The settings panel exposes source, field visibility/reorder, filter
conditions, sort rules, and editable widget title —
`SidePanelPageLayoutRecordTableSettings` + sub-pages, matching chart
widget pattern
- Filters, sorts, and aggregate operations update the table in real time
but only persist to the View on explicit dashboard save —
`useSaveRecordTableWidgetsViewDataOnDashboardSave` (diff + flush on
save)
- Headers are always non-interactive (no dropdown, no cursor pointer);
columns are resizable only in edit mode; cells are non-editable in both
modes — `isRecordTableColumnHeadersReadOnlyComponentState`,
`isRecordTableColumnResizableComponentState`,
`isRecordTableCellsNonEditableComponentState` (Jotai component states)
- Hover portals and CSS column widths no longer bleed between multiple
table widgets — `getRecordTableHtmlId(tableId)`,
`getRecordTableCellId(tableId, …)`,
`updateRecordTableCSSVariable(tableId, …)` scope all DOM IDs and CSS
variables per instance
- Clicking inside a widget's content area no longer opens the settings
panel — `WidgetCardContent` stops click propagation when editable,
limiting settings-open to the card header and chrome
- Second widget creation no longer freezes the page —
`PageLayoutGridLayout` drag-select filters by `cell-` prefix to exclude
record UUIDs from grid cell detection
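The per-instance DOM-ID scoping described above can be sketched as below; the exact ID formats used in the codebase are assumptions:

```typescript
// Scope table-level and cell-level DOM IDs by tableId so two table widgets
// on the same page never collide on hover portals or CSS variables.
const getRecordTableHtmlId = (tableId: string): string =>
  `record-table-${tableId}`;

const getRecordTableCellId = (
  tableId: string,
  rowIndex: number,
  columnIndex: number,
): string => `record-table-cell-${tableId}-${rowIndex}-${columnIndex}`;
```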
## Follow-up fixes
**Widget save flow**
- Saving a dashboard silently dropped record table widget changes
(column visibility, order, filters, sorts, aggregates) because widget
data save was bundled inside the layout save and only ran when layout
structure changed
- Widget data now persists independently via
`useSavePageLayoutWidgetsData`, called in all save paths (dashboard
save, record page save, layout customization save); saves are also
skipped when nothing has changed
**Drag-and-drop / checkbox columns in widget**
- Record table widgets showed the drag handle column and checkbox
selection column even though row reordering and multi-select are
meaningless in a read-only widget
- Two new component states
(`isRecordTableDragColumnHiddenComponentState`,
`isRecordTableCheckboxColumnHiddenComponentState`) hide each column
independently; widget tables now display only data columns
**Sticky column layout**
- Sticky positioning of the first three columns used `:nth-of-type` CSS
selectors — when drag or checkbox columns were hidden, the selector
targeted the wrong column and the first data column didn't stick
- Sticky CSS now targets semantic class names
(`RECORD_TABLE_COLUMN_DRAG_AND_DROP_WIDTH_CLASS_NAME`, etc.) so sticky
behavior is correct regardless of which columns are hidden
**Save/Cancel buttons during edit mode**
- Save and Cancel command-menu buttons were unpinned during dashboard
edit mode because the pin logic excluded all items while
`isPageInEditMode` was true
- Items whose availability expression contains `isPageInEditMode` are
now exempted from the unpin rule; Save/Cancel stay pinned during editing
**Title input auto-focus**
- Selecting "Record Table" as widget type auto-focused the title input,
interrupting the configuration flow
- `focusTitleInput` is now `false` when navigating to record table
settings
**Morph relation field error**
- A field with missing `morphRelations` metadata crashed the page with a
"refresh" error from `mapObjectMetadataToGraphQLQuery`
- Now returns an empty array and silently omits the field from the query
instead of crashing
**`updateRecordMutation` prop removal**
- `RecordTableWithWrappers` required callers to pass an
`updateRecordMutation` callback, duplicating `useUpdateOneRecord` at
every usage site
- The mutation is now owned inside `RecordTableContextProvider` via
`RecordTableUpdateContext`; the prop is gone
**Standalone → Widget module rename**
- `record-table-standalone` module renamed to `record-table-widget` —
`StandaloneRecordTable` → `RecordTableWidget`,
`StandaloneRecordTableViewLoadEffect` →
`RecordTableWidgetViewLoadEffect`, etc.
**RecordTableRow cell extraction**
- Row rendering logic (`RecordTableCellDragAndDrop`,
`RecordTableCellCheckbox`, `RecordTableFieldsCells`, hotkey/arrow-key
effects) was duplicated between `RecordTableRow` and
`RecordTableRowVirtualizedFullData`
- Extracted `RecordTableRowCells` (shared cell content) and
`RecordTableStaticTr` (non-draggable `<tr>` wrapper); when drag column
is hidden, rows render inside a static `<tr>` instead of the draggable
wrapper
**View load effect metadata tracking**
- `RecordTableWidgetViewLoadEffect` now tracks
`objectMetadataItem.updatedAt` alongside `viewId` to re-load states when
metadata changes (e.g. field additions), preventing stale column data
**Data source dropdown deduplication**
- Extracted `filterReadableActiveObjectMetadataItems` util, shared by
both chart and record table data source dropdowns — removes duplicated
permission-filtering logic
**RECORD_TABLE view identifier mapping (server)**
- Added `RECORD_TABLE` case to
`fromPageLayoutWidgetConfigurationToUniversalConfiguration` and
`fromUniversalConfigurationToFlatPageLayoutWidgetConfiguration` so
widget views are properly mapped during workspace import/export
**GraphQL error handler typing (server)**
- `formatError` parameter changed from `any` to `unknown`;
`workspaceQueryRunnerGraphqlApiExceptionHandler` broadened from
`QueryFailedErrorWithCode` to `Error | QueryFailedError` — removes
unsafe type casts
**Save hook signature**
- `useSaveRecordTableWidgetsViewDataOnDashboardSave` no longer takes
`pageLayoutId` in constructor; receives it as a callback parameter,
eliminating the need for `useAtomComponentStateCallbackState`
**Customize Dashboard hidden during edit mode**
- The "Customize Dashboard" command was still visible while already
editing — its `conditionalAvailabilityExpression` now includes `not
isPageInEditMode`
**Fields dropdown split**
- `RecordTableFieldsDropdownContent` (300+ lines) split into
`RecordTableFieldsDropdownVisibleFieldsContent` and
`RecordTableFieldsDropdownHiddenFieldsContent`
**Checkbox placeholder cleanup**
- Removed unnecessary `StyledRecordTableTdContainer` wrapper from
`RecordTableCellCheckboxPlaceholder`
Fixes https://github.com/twentyhq/twenty/issues/13103
Newly created workspaces start with `metadataVersion = 0`. The truthy
check `this.currentWorkspace?.metadataVersion && {…}` treated `0` as
falsy, so the `X-Schema-Version` header was never sent. Without this
header, the backend's schema-mismatch detection is bypassed and raw
validation errors leak to Sentry. Replaced with an `isDefined()` check.
Also replaced bare negation checks with `isDefined()` in the backend
validation hook for consistency.
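The zero-version bug can be reduced to a few lines. The header-building helper below is illustrative, not the actual code:

```typescript
// A proper definedness check keeps 0 (a valid metadata version) truthy-safe.
const isDefined = <T>(value: T | null | undefined): value is T =>
  value !== null && value !== undefined;

// Illustrative: a truthy `metadataVersion && {...}` check would drop the
// header for version 0; isDefined() sends it for any defined version.
const buildSchemaHeader = (
  metadataVersion: number | undefined,
): Record<string, string> =>
  isDefined(metadataVersion)
    ? { 'X-Schema-Version': String(metadataVersion) }
    : {};
```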
Refactored `backfill-command-menu-items` upgrade command to remove the
manual `QueryRunner` transaction and wrap standard and trigger workflow
version command menu item creation into a single
`validateBuildAndRunWorkspaceMigration` call.
## Summary
- Visual regression dispatch was failing for external contributor PRs
because fork PRs don't have access to repo secrets
(`CI_PRIVILEGED_DISPATCH_TOKEN`)
- Moved the dispatch from inline jobs in `ci-front.yaml` / `ci-ui.yaml`
to a new `workflow_run`-triggered workflow
- `workflow_run` runs in the base repo context and always has access to
secrets, regardless of whether the PR is from a fork
- Follows the same pattern already used by `post-ci-comments.yaml` for
breaking changes dispatch
- Handles the fork case where `workflow_run.pull_requests` is empty by
falling back to a head label search
## Test plan
- [ ] Verify CI Front and CI UI workflows still pass without the removed
jobs
- [ ] Verify the new `visual-regression-dispatch.yaml` triggers after CI
Front / CI UI complete
- [ ] Test with a fork PR to confirm the dispatch succeeds
## Problem
- ⚠️ Multi-edit could silently update ALL records of an object, with no
undo and no Ctrl+Z
- After a batch update the table showed stale data —
`useIncrementalUpdateManyRecords` had no explicit `findMany` refetch and
`skipOptimisticEffect` was missing
- Clicking inside the side panel deselected all kanban cards —
`RecordBoardClickOutsideEffect` uses `refs:[]` and the side panel had no
`data-click-outside-id`
- Clicking a currency/select dropdown inside the panel also triggered
deselection — `FloatingPortal` renders outside the side panel DOM,
bypassing the click-outside-id exclusion
- An empty selection silently matched every record —
`computeContextStoreFilters` returned `undefined` filter when
`selectedRecordIds` was `[]`
## Fix
- Table refreshes correctly after batch update —
`useRefetchFindManyRecords` explicitly refetches `FindMany<Object>`
queries; `useIncrementalUpdateManyRecords` adds `skipOptimisticEffect:
true` and calls it in `finally`
- Clicking the side panel no longer deselects kanban cards —
`SidePanelForDesktop` carries `data-click-outside-id`;
`RecordBoardClickOutsideEffect` +
`RecordTableBodyFocusClickOutsideEffect` exclude it
- Clicking dropdowns inside the panel no longer deselects either —
`ParentClickOutsideIdContext` propagates the side panel ID into
`FloatingPortal` content via `DropdownInternalContainer`
- Empty selection can no longer match all records —
`computeContextStoreFilters` returns `{ id: { in: [] } }` instead of
`undefined`
- Apply is disabled with no selection; a confirmation modal shows the
count + no-undo warning before executing —
`UpdateMultipleRecordsContainer`
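The empty-selection guard can be shown by contrasting the two filter shapes. Function names here are illustrative; the real `computeContextStoreFilters` takes more inputs:

```typescript
type SelectionFilter = { id: { in: string[] } } | undefined;

// Before: an empty selection produced `undefined`, which downstream query
// code interpreted as "no filter", i.e. every record matched.
const buggySelectionFilter = (selectedRecordIds: string[]): SelectionFilter =>
  selectedRecordIds.length > 0 ? { id: { in: selectedRecordIds } } : undefined;

// After: an explicit empty `in` clause matches nothing.
const fixedSelectionFilter = (selectedRecordIds: string[]): SelectionFilter =>
  ({ id: { in: selectedRecordIds } });
```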
## Not included
- Undo / snapshot restore — requires backend changes, out of scope
## Blast radius
- `ParentClickOutsideIdContext` touches `DropdownInternalContainer` (207
`<Dropdown>` usages). `parentClickOutsideId` is `undefined` everywhere
outside the side panel → attribute not rendered → zero behavioral change
for existing consumers.
---------
Co-authored-by: Samuel Arbibe <samuelarbibe@Samuels-MacBook-Pro.local>
Co-authored-by: Lucas Bordeau <bordeau.lucas@gmail.com>
Co-authored-by: Félix Malfait <felix.malfait@gmail.com>
- Makes `engineComponentKey` non-nullable on `CommandMenuItem`. Adds two new
engine component keys, `TRIGGER_WORKFLOW_VERSION` and
`FRONT_COMPONENT_RENDERER`, to cover the former standalone cases.
- Unifies the previously separate engine, front component, and workflow
runners into a single `CommandRunner` component with a unified
`useMountCommand` hook that handles all command types.
## Summary
Fixes metadata lifecycle bugs during sign-in, sign-out, and locale
change:
- **Cross-tab sign-out**: Broadcasts sign-out via `BroadcastChannel` so
other tabs clear their session gracefully instead of hitting stale-token
errors
- **SSE teardown on sign-out**: Handles `UNAUTHENTICATED`/`FORBIDDEN`
errors in the SSE event stream effect instead of throwing unhandled
errors
- **Locale switch resilience**: `invalidateAndReload` now invalidates
collection hashes instead of clearing the store to empty, so components
never see 0 metadata items during the reload transition
- **Sign-in background mock**: Uses non-throwing
`objectMetadataItemFamilySelector` instead of hooks that throw on
missing metadata
- **View name placeholders**: Guards against `undefined` `viewName`
during metadata transitions (the minimal metadata query doesn't include
`name`)
- **Session cleanup**: Selective `localStorage` clearing
(`clearSessionLocalStorageKeys`) preserves metadata keys;
`clearAllSessionLocalStorageKeys` for full clears
- **Metadata reload API**: New `useMetadataStoreActions` hook as the
high-level API for metadata lifecycle operations (`applyMockedMetadata`,
`invalidateAndReload`, `loadMockedMetadataAtomic`)
## Test plan
- [ ] Sign out on Tab A → Tab A shows sign-in page with no console
errors
- [ ] Tab B (logged in) receives cross-tab broadcast and redirects to
sign-in
- [ ] No "Forbidden resource" SSE errors in console during sign-out
- [ ] Change language in Settings > Experience → no crash, metadata
refreshes in background
- [ ] Sign back in after sign-out → metadata loads correctly, app is
functional
- [ ] Re-sign-in after locale change → correct locale is preserved
The `performCombinedFindManyRecords` call was previously fire-and-forget
(`.then()`), meaning the loading state was set to false and the picker
became interactive before the full records were written into the Apollo
cache.
Fixes https://github.com/twentyhq/twenty/issues/17669
Co-authored-by: Charles Bochet <charlesBochet@users.noreply.github.com>
## Summary
- **Backend**: Add JSON validation for the `blocknote` subfield in rich
text API inputs — rejects values that aren't valid JSON or aren't arrays
(BlockNote content is always `PartialBlock[]`). This prevents corrupted
data from being persisted to the database.
- **Frontend**: Replace all 5 unprotected `JSON.parse` calls on
blocknote content with the safe `parseJson` utility from
`twenty-shared`. Invalid content now degrades gracefully (empty block /
empty string / unchanged passthrough) instead of crashing the app.
- **Tests**: Added integration tests for invalid blocknote JSON (both
GraphQL and REST), unit tests for the new validation, and updated
existing test constants to use valid BlockNote JSON.
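A minimal sketch of the validation rule described above — `parseJsonSafely` stands in for the `parseJson` utility from `twenty-shared`, whose exact signature may differ:

```typescript
// Hypothetical sketch: blocknote content must be valid JSON *and* an
// array, since BlockNote content is always PartialBlock[].
const parseJsonSafely = (raw: string): unknown => {
  try {
    return JSON.parse(raw);
  } catch {
    return null; // invalid JSON degrades gracefully instead of throwing
  }
};

const isValidBlocknoteInput = (raw: string): boolean => {
  const parsed = parseJsonSafely(raw);
  return Array.isArray(parsed);
};

console.log(isValidBlocknoteInput('[{"type":"paragraph"}]')); // true
console.log(isValidBlocknoteInput('{"type":"paragraph"}'));   // false: not an array
console.log(isValidBlocknoteInput('not json'));               // false: invalid JSON
```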
## Context
A user reported a `SyntaxError: Expected ',' or ']' after array element`
crash caused by malformed blocknote JSON stored in the database. The
data had `"children":[]` nested inside the `content` array instead of as
a sibling property. The API accepted this invalid JSON because it only
validated that `blocknote` was a string, not that it contained valid
JSON. On the frontend, 5 call sites used bare `JSON.parse` with no error
handling, causing a white-screen crash.
## Test plan
- [x] Unit tests pass: `validate-rich-text-field-or-throw.util.spec.ts`
(10/10)
- [x] Integration tests pass: `rich-text-field-create-input-validation`
(8/8)
- [ ] Verify creating a note with valid rich text still works end-to-end
- [ ] Verify API returns clear error when blocknote contains invalid
JSON
- [ ] Verify frontend renders empty block instead of crashing when
encountering corrupted data
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Added a new IS_GRAPHQL_QUERY_TIMING_ENABLED feature flag that, when
activated per workspace, logs the execution time and operation name of
every GraphQL query across both the standard Yoga pipeline and the
direct execution fast path. The timing context propagates via
AsyncLocalStorage to also instrument buildColumnsToSelect and
formatResult — giving a breakdown of column selection and result
transformation costs without any caller changes.
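A minimal sketch of the propagation pattern, assuming a simplified timing context (names and shapes here are illustrative, not the actual implementation):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical sketch: a context created at query entry travels via
// AsyncLocalStorage, so deep helpers such as buildColumnsToSelect can
// record durations without any signature changes along the call chain.
type TimingContext = { operationName: string; marks: Record<string, number> };

const timingStorage = new AsyncLocalStorage<TimingContext>();

const recordTiming = (label: string, durationMs: number): void => {
  const ctx = timingStorage.getStore();
  if (ctx !== undefined) {
    ctx.marks[label] = durationMs;
  }
};

const runWithTiming = (operationName: string, fn: () => void): TimingContext => {
  const ctx: TimingContext = { operationName, marks: {} };
  timingStorage.run(ctx, fn);
  return ctx;
};

// Nested helpers record into the ambient context, no parameter threading:
const result = runWithTiming('findManyCompanies', () => {
  recordTiming('buildColumnsToSelect', 3);
  recordTiming('formatResult', 7);
});
console.log(result.marks); // { buildColumnsToSelect: 3, formatResult: 7 }
```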
## Summary
- **Reduce app-dev image size** by stripping ~60MB of build artifacts
not needed at runtime from the server build stage: `.js.map` source maps
(29MB), `.d.ts` type declarations (9MB), compiled test files (14MB), and
unused package source directories (~9MB).
- **Add CI smoke test** for the `twenty-app-dev` all-in-one Docker
image, running in parallel with the existing docker-compose test. Builds
the image, starts the container, and verifies `/healthz` returns 200.
## Test plan
- [x] Built image locally and verified server, worker, Postgres, and
Redis all start correctly
- [x] Verified `/healthz` returns 200 and frontend serves at `/`
- [ ] CI `test-compose` job passes (existing test, renamed from `test`)
- [ ] CI `test-app-dev` job passes (new parallel job)
Made with [Cursor](https://cursor.com)
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- New workflow `ci-visual-regression.yaml` that runs on PRs touching
`twenty-ui` or `twenty-shared`
- Builds `twenty-ui` storybook and uploads the tarball as a GitHub
Actions artifact
- Dispatches to `twentyhq/ci-privileged` which handles the pixel-diff
comparison and posts a PR comment
### Flow
```
twenty CI (this PR)       ci-privileged                    pixel-perfect
───────────────────       ─────────────                    ─────────────
Build storybook
Upload artifact
Dispatch ────────────────► Download artifact
                           Upload to S3 (OIDC)
                           POST /import-from-storage ─────► Import build
                           POST /diffs/run ───────────────► Screenshots + diff
                           ◄─────────────────────────────── Diff report JSON
                           Post PR comment
```
Companion PRs:
- https://github.com/twentyhq/ci-privileged/pull/1 (ci-privileged
workflow)
- https://github.com/twentyhq/twenty-infra/pull/497 (OIDC trust + Helm
cleanup)
- https://github.com/twentyhq/pixel-perfect/pull/5 (API simplification)
## Test plan
- [ ] Merge companion PRs first and configure secrets/environments
- [ ] Open a test PR touching twenty-ui, verify storybook builds and
dispatch fires
- [ ] Verify visual regression comment appears on the PR
## Summary
- Fixes a bug where removing an option from a MULTI_SELECT field fails
with `invalid input value for enum` when existing records contain the
removed value alongside surviving values.
- The root cause was the `ELSE` branch in `updateArrayEnum` which tried
to cast removed enum values (e.g. `DISTRIBUTOR`) to the new enum type
that no longer includes them.
- The fix replaces the `ELSE` cast with a NULL-producing implicit CASE
default and uses `array_agg(...) FILTER (WHERE mapped_value IS NOT
NULL)` to silently strip removed values from existing arrays.
### Before (bug)
```sql
-- ELSE branch tries to cast removed value to new enum → crash
CASE unnest_value::text
WHEN 'IMPL' THEN 'IMPL'::new_enum
WHEN 'APP' THEN 'APP'::new_enum
ELSE unnest_value::text::new_enum -- 'DISTRIBUTOR' fails here
END
```
### After (fix)
```sql
-- No ELSE: removed values produce NULL, filtered out by array_agg
SELECT array_agg(mapped_value) FILTER (WHERE mapped_value IS NOT NULL)
FROM (
SELECT CASE unnest_value::text
WHEN 'IMPL' THEN 'IMPL'::new_enum
WHEN 'APP' THEN 'APP'::new_enum
END AS mapped_value
FROM unnest(old_column) AS unnest_value
) enum_mapping
```
Three feature flags changed the UI:
- `IS_NAVIGATION_MENU_ITEM_ENABLED` — Sidebar no longer has a default
"People" link → navigate via URL instead
- `IS_COMMAND_MENU_ITEM_ENABLED` — Button label is now "Create new Person"
(interpolated from object name) instead of static "Create new record"
- `IS_JUNCTION_RELATIONS_ENABLED` — Company is now a junction relation
("Previous Companies") displayed inline, no longer a boxed
dynamic-relation-widget on the record page
Updated the upgrade command to also backfill command menu items for
workflows.
Both backfills live in the same command because the feature flag can
only be enabled once both operations are complete: the creation of the
standard command menu items and the creation of workflow command menu
items.
## Summary
- File serving endpoints (`GET file/:fileFolder/:id` and `GET
public-assets/...`) were piping S3/local file streams directly to the
response without any HTTP headers, allowing a stored XSS attack via
uploaded HTML files rendered inline on the CRM origin.
- Adds `Content-Type`, `Content-Disposition`, and
`X-Content-Type-Options: nosniff` headers to all file serving responses.
Only known-safe MIME types (images, PDF, plain text, audio, video) are
served inline; everything else (HTML, SVG, XML, etc.) forces
`Content-Disposition: attachment` to trigger download instead of
rendering.
- New `setFileResponseHeaders` utility with an explicit allowlist of
inline-safe MIME types.
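A minimal sketch of the allowlist approach — the real utility is `setFileResponseHeaders`; the function name, return shape, and exact MIME list here are illustrative:

```typescript
// Hypothetical sketch: only known-safe MIME types render inline;
// everything else (HTML, SVG, XML, ...) is forced to download so it
// can never execute on the CRM origin. The list below is illustrative.
const INLINE_SAFE_MIME_TYPES = new Set([
  'image/png',
  'image/jpeg',
  'image/gif',
  'application/pdf',
  'text/plain',
  'audio/mpeg',
  'video/mp4',
]);

const buildFileResponseHeaders = (
  mimeType: string,
  filename: string,
): Record<string, string> => ({
  'Content-Type': mimeType,
  // Prevents the browser from second-guessing the declared type.
  'X-Content-Type-Options': 'nosniff',
  'Content-Disposition': INLINE_SAFE_MIME_TYPES.has(mimeType)
    ? 'inline'
    : `attachment; filename="${filename}"`,
});

console.log(buildFileResponseHeaders('text/html', 'payload.html')['Content-Disposition']);
// → attachment; filename="payload.html"
```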
## Test plan
- [x] Unit tests pass (9 tests including 2 new ones: header assertions
and attachment-disposition for HTML)
- [x] Lint clean (`lint:diff-with-main`)
- [x] Typecheck clean (`nx typecheck twenty-server`)
- [ ] Manual: upload an HTML file via `uploadWorkflowFile`, access the
returned URL — should download instead of rendering
- [ ] Manual: upload a PNG image, access the URL — should render inline
with correct `Content-Type: image/png`
- [ ] Manual: verify `X-Content-Type-Options: nosniff` header is present
on all file responses
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Etienne <etiennejouan@users.noreply.github.com>
Co-authored-by: Etienne <45695613+etiennejouan@users.noreply.github.com>
## Summary
- The front deploy to S3 is failing because `@ai-sdk/groq` was removed
from `packages/twenty-server/package.json` but the `yarn.lock` was never
updated
## Summary
- Adds a `repository-dispatch` step to the AI catalog sync workflow
(`ci-ai-catalog-sync.yaml`) that notifies `twenty-infra` when a sync PR
is created
- This allows the automerge workflow in `twenty-infra` to reactively
merge the PR instead of waiting for the next cron run
## Test plan
- [ ] Verify `TWENTY_INFRA_TOKEN` secret is available (same one used by
i18n workflows)
- [ ] Trigger the AI catalog sync workflow manually and confirm the
dispatch fires
Made with [Cursor](https://cursor.com)
Automated daily sync of `ai-providers.json` from
[models.dev](https://models.dev).
This PR updates pricing, context windows, and model availability based
on the latest data.
New models meeting inclusion criteria (tool calling, pricing data,
context limits) are added automatically.
Deprecated models are detected based on cost-efficiency within the same
model family.
**Please review before merging** — verify no critical models were
incorrectly deprecated.
Co-authored-by: FelixMalfait <6399865+FelixMalfait@users.noreply.github.com>
## Summary
- The `ci-ai-catalog-sync` cron workflow was failing because the
`ai:sync-models-dev` NestJS command bootstraps the full app, which tries
to connect to PostgreSQL — unavailable in CI.
- Converted the sync logic to a standalone `ts-node` script
(`scripts/ai-sync-models-dev.ts`) that runs without NestJS, eliminating
the database dependency.
- Removed the `Build twenty-server` step from the workflow since it's no
longer needed, making the job faster.
## Test plan
- [x] Verified the standalone script runs successfully locally via `npx
nx run twenty-server:ts-node-no-deps-transpile-only --
./scripts/ai-sync-models-dev.ts`
- [x] Verified `--dry-run` flag works correctly
- [x] Verified the output `ai-providers.json` is correctly written with
valid JSON (135 models across 5 providers)
- [x] Verified the script passes linting with zero errors
- [ ] CI should pass without requiring a database service
Fixes:
https://github.com/twentyhq/twenty/actions/runs/23424202182/job/68135439740
Made with [Cursor](https://cursor.com)
## Summary
- **Add `lingui:compile` to Dockerfile** before both the server and
frontend build stages, ensuring compiled translation catalogs are always
fresh regardless of git state
- **Add `repository-dispatch` to i18n workflows** (`i18n-push.yaml` and
`i18n-pull.yaml`) to trigger reactive automerge in `twenty-infra` when
the i18n PR is ready, replacing the 15-minute polling approach
## Context
Users sometimes see "Uncompiled message detected" errors because
releases can be cut from `main` before the i18n PR (with freshly
compiled translation catalogs) has been merged. This creates a race
condition between new translatable strings landing on `main` and their
compiled catalogs being available.
These changes fix this in two ways:
1. **Safety net in builds**: Every Docker build now compiles
translations before building, so even if compiled catalogs in git are
stale, the build artifact is always correct
2. **Faster i18n PR merges**: Instead of a 15-minute cron polling for
i18n PRs, the workflows now notify `twenty-infra` immediately when
translations are ready, reducing merge latency from ~15 minutes to ~1
minute
Companion PR in twenty-infra: twentyhq/twenty-infra
(feat/i18n-reactive-automerge)
## Test plan
- [ ] Verify `TWENTY_INFRA_TOKEN` secret is available to i18n workflows
- [ ] Docker build still succeeds with the added `lingui:compile` steps
- [ ] i18n-push triggers automerge in twenty-infra after pushing changes
- [ ] i18n-pull triggers automerge in twenty-infra after pulling
translations
Made with [Cursor](https://cursor.com)
## Summary
- Removes the `isBillingEnabled` guard that was hiding the Providers and
Custom Providers sections in the Admin AI settings
- The `GET_AI_PROVIDERS` query was being skipped when billing was
enabled, and the provider UI sections were conditionally hidden —
there's no reason to gate provider configuration behind billing status
- Cleans up the now-unused `billingState` and `useAtomStateValue`
imports
## Test plan
- [ ] Verify Providers and Custom Providers sections are visible in
Admin > AI settings on cloud (billing enabled)
- [ ] Verify they remain visible on self-hosted (billing disabled)
Made with [Cursor](https://cursor.com)
## Summary
This PR adds a comprehensive billing usage analytics feature that
provides detailed breakdowns of credit consumption across execution
types, users, resources, and time periods. The implementation includes a
new ClickHouse-backed analytics service, GraphQL API endpoint, and a
frontend dashboard component.
## Key Changes
### Backend
- **New BillingAnalyticsService**: Queries ClickHouse for usage
breakdowns by user, resource, execution type, and time series data
- **BillingEventWriterService**: Writes billing events to ClickHouse for
analytics while maintaining best-effort semantics (never blocks Stripe
billing)
- **ClickHouse Schema**: Added `billingEvent` table with 3-year TTL for
storing detailed billing event data
- **GraphQL Resolver**: New `getBillingAnalytics` query that aggregates
usage data for the current billing period, protected by feature flag and
billing permissions
- **Enhanced BillingUsageEvent**: Added `userWorkspaceId` field to track
per-user credit consumption
- **AI Billing Integration**: Updated AI billing service to pass
`userWorkspaceId` when recording usage events
### Frontend
- **SettingsBillingAnalyticsSection**: New component displaying:
- Usage breakdown by execution type with progress bars
- Daily usage time series chart (28-day view)
- Per-user credit consumption breakdown
- Per-resource (agent/workflow) credit consumption breakdown
- **SettingsUsage Page**: Dedicated page for viewing usage analytics
- **GraphQL Query**: `GetBillingAnalytics` query with generated hooks
- **Navigation**: Added Usage menu item in settings (feature-flagged)
- **Mock Data**: Included screenshot mock data for preview/testing
### Feature Flag
- Added `IS_USAGE_ANALYTICS_ENABLED` feature flag to control visibility
and access to analytics features
## Implementation Details
- Analytics data is queried in parallel for performance
- ClickHouse writes are non-blocking to ensure billing operations never
fail
- Progress bars use dynamic coloring from a predefined palette
- Time series visualization normalizes bar heights relative to max value
- Empty state handling when no analytics data is available
- Responsive UI with proper text truncation for long names
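The time-series normalization mentioned above can be sketched roughly like this (names and the pixel scale are illustrative, not the actual component code):

```typescript
// Hypothetical sketch: each day's credit usage is scaled relative to
// the series maximum, so the tallest bar always fills the chart.
const normalizeBarHeights = (
  dailyCredits: number[],
  maxBarHeightPx = 100,
): number[] => {
  const max = Math.max(...dailyCredits);
  if (max <= 0) {
    return dailyCredits.map(() => 0); // empty state: all bars flat
  }
  return dailyCredits.map((value) =>
    Math.round((value / max) * maxBarHeightPx),
  );
};

console.log(normalizeBarHeights([10, 40, 20])); // [25, 100, 50]
```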
https://claude.ai/code/session_01Y1EqrX6PFq3EJxJq89h7DF
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- The `MigrateModelIdsToCompositeFormat` migration was setting
`agent.modelId = NULL`, but the column has a `NOT NULL` constraint —
causing the migration to fail on deploy.
- Fixed by setting `modelId` to the `'default-smart-model'` sentinel
value instead of NULL. The runtime already resolves this sentinel
dynamically via `getEffectiveModelConfig` / `isDefaultModelSentinel`, so
agents correctly fall back to the admin-configured smart model.
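A minimal sketch of how the sentinel resolves at runtime — the real helpers are `getEffectiveModelConfig` / `isDefaultModelSentinel`, whose signatures may differ, and the model ids here are made up:

```typescript
// Hypothetical sketch: migrated agents carry the sentinel instead of
// NULL, satisfying the NOT NULL constraint while still deferring to
// the admin-configured smart model at runtime.
const DEFAULT_SMART_MODEL_SENTINEL = 'default-smart-model';

const isDefaultModelSentinel = (modelId: string): boolean =>
  modelId === DEFAULT_SMART_MODEL_SENTINEL;

const resolveEffectiveModelId = (
  agentModelId: string,
  adminConfiguredSmartModelId: string,
): string =>
  isDefaultModelSentinel(agentModelId)
    ? adminConfiguredSmartModelId
    : agentModelId;

// Agents with the sentinel follow the admin default; explicit ids pass through:
console.log(resolveEffectiveModelId('default-smart-model', 'admin-model')); // admin-model
console.log(resolveEffectiveModelId('explicit-model', 'admin-model'));      // explicit-model
```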
## Test plan
- [ ] Run `npx nx run twenty-server:database:migrate:prod` — migration
should complete without errors
- [ ] Verify agents still resolve to the correct model at runtime
(sentinel → `getDefaultPerformanceModel()`)
Made with [Cursor](https://cursor.com)
- Renamed `FieldConfiguration`'s `layout` field to `fieldDisplayMode` as it
caused issues with the `layout` field of `BarChartConfiguration`
- Create relation Field widgets for standard objects
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
- Allow users to create a Fields widget or a Field widget; **this PR is
focused on Fields widgets as Field widget can't be configured yet**
- Automatically create a filled view when the user creates a draft
Fields widget in edit mode
https://github.com/user-attachments/assets/b2dbba52-c614-44cd-bf6c-095ce9d4ec26
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
## Automated fix for [bug 6848](https://sonarly.com/issue/6848?type=bug)
**Severity:** `critical`
### Summary
WorkflowEditActionCodeFields.tsx calls Object.entries(functionInput)
without a null guard on line 40, crashing when
action.settings.input.logicFunctionInput is undefined. This happens
because workflow version step data in the database may contain the old
field name serverlessFunctionInput (from before the rename in commit
da6f1bbef3), and no data migration was created to update JSON keys
inside the steps JSONB column.
### User Impact
Users viewing or editing a workflow version containing a CODE step that
was created before the serverlessFunction-to-logicFunction rename see a
crash. The page fails to render the workflow step detail panel,
preventing them from viewing or modifying their workflow.
### Root Cause
Proximate cause: Object.entries(functionInput) on line 40 of
WorkflowEditActionCodeFields.tsx throws TypeError because functionInput
is undefined. The stack trace confirms this: the crash occurs during
React rendering (through scheduler -> react-dom render pipeline ->
WorkflowEditActionCodeFields:40:15 -> Object.entries).
1. Why did Object.entries throw? Because the functionInput prop is
undefined. It is initialized from
action.settings.input.logicFunctionInput via useState on line 142 of
WorkflowEditActionCode.tsx, with no fallback value.
2. Why is action.settings.input.logicFunctionInput undefined? Because
the workflow version step data stored in the database JSONB steps column
contains the OLD field name serverlessFunctionInput instead of
logicFunctionInput. When the frontend accesses logicFunctionInput on the
deserialized JSON object, it returns undefined.
3. Why does the database still have the old field name? Because commit
da6f1bbef3 (Rename serverlessFunction to logicFunction, Jan 28 2026)
renamed the Zod schema field from serverlessFunctionInput to
logicFunctionInput, and the database migration
1769556947746-renameServerless.ts only renames SQL tables and columns
(serverlessFunction table to logicFunction, etc.) but does NOT update
the JSON keys inside the steps JSONB column of the workflowVersion
table.
4. Why was no JSON data migration created? The rename was a large-scale
refactoring across the entire codebase. The steps column stores workflow
action data as opaque JSON, and the migration only addressed relational
schema changes (table/column renames) without updating the embedded JSON
document structure. There is no upgrade command in 1-17, 1-18, or 1-19
directories that transforms the JSON keys from serverlessFunctionInput
to logicFunctionInput.
5. Why did the component not handle this gracefully? The
WorkflowEditActionCodeFields component has no null guard on the
functionInput prop before calling Object.entries. Notably, the analogous
WorkflowEditActionLogicFunction component DOES have a null guard on line
52 (action.settings.input.logicFunctionInput ?? {}), showing the team
was aware of this possibility in one component but missed it in the
other.
**Introduced by:** martmull on 2026-02-16 in commit
[`da064d5`](da064d5e88)
### Suggested Fix
Two minimal null guards are added, matching the pattern already used in
the analogous WorkflowEditActionLogicFunction component. First, in
WorkflowEditActionCode.tsx the useState initializer is changed from
action.settings.input.logicFunctionInput to
action.settings.input.logicFunctionInput ?? {} so that functionInput is
always an empty object rather than undefined when the DB record still
uses the old serverlessFunctionInput key. Second, in
WorkflowEditActionCodeFields.tsx the Object.entries call is changed from
Object.entries(functionInput) to Object.entries(functionInput ?? {}) as
a defensive guard at the render layer, ensuring the component renders
safely even if an undefined value is passed from any caller.
### Evidence
- **Code:**
[packages/twenty-front/src/modules/workflow/workflow-steps/workflow-actions/code-action/components/WorkflowEditActionCode.tsx:142
- const [functionInput, setFunctionInput]
=](https://github.com/twentyhq/twenty/blob/main/packages/twenty-front/src/modules/workflow/workflow-steps/workflow-actions/code-action/components/WorkflowEditActionCode.tsx#L142)
- **Code:**
[packages/twenty-front/src/modules/workflow/workflow-steps/workflow-actions/code-action/components/WorkflowEditActionCodeFields.tsx:40
- {Object.entries(functionInput ?? {}).map(([inputKey, inputValue]) =>
{](https://github.com/twentyhq/twenty/blob/main/packages/twenty-front/src/modules/workflow/workflow-steps/workflow-actions/code-action/components/WorkflowEditActionCodeFields.tsx#L40)
---
*Generated by [Sonarly](https://sonarly.com)*
Co-authored-by: Sonarly Claude Code <claude-code@sonarly.com>
- Automatically create and sync command menu items for all workflows
with a manual trigger
- Refactor `useCommandMenuItemsFromBackend`
- Prefill a _Quick Lead_ workflow command menu item during workspace
setup and dev seeding
- Add a ready prop to `HeadlessEngineCommandWrapperEffect` to prevent
premature execution when async data hasn't loaded yet
---------
Co-authored-by: Charles Bochet <charles@twenty.com>
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Twenty is an open-source CRM built with modern technologies in a monorepo structure. The codebase is organized as an Nx workspace with multiple packages.
## Key Commands
### Development
```bash
# Start development environment (frontend + backend + worker)
yarn start
# Individual package development
npx nx start twenty-front # Start frontend dev server
npx nx start twenty-server # Start backend server
npx nx run twenty-server:worker # Start background worker
```
### Testing
```bash
# Preferred: run a single test file (fast)
npx jest path/to/test.test.ts --config=packages/PROJECT/jest.config.mjs
# Run all tests for a package
npx nx test twenty-front # Frontend unit tests
npx nx test twenty-server # Backend unit tests
npx nx run twenty-server:test:integration:with-db-reset # Integration tests with DB reset
# To run an individual test or a pattern of tests, use the following command:
cd packages/{workspace} && npx jest "pattern or filename"
# Storybook
npx nx storybook:build twenty-front
npx nx storybook:test twenty-front
# When testing the UI end to end, click on "Continue with Email" and use the prefilled credentials.
```
### Code Quality
```bash
# Linting (diff with main - fastest, always prefer this)
```

IMPORTANT: Use Context7 for code generation, setup or configuration steps, or library/API documentation. Automatically use the Context7 MCP tools to resolve library IDs and get library docs without waiting for explicit requests.
### Before Making Changes
1. Always run linting (`lint:diff-with-main`) and type checking after code changes
2. Test changes with relevant test suites (prefer single-file test runs)
3. Ensure instance commands are generated for entity changes (`database:migrate:generate`)
4. Check that GraphQL schema changes are backward compatible
5. Run `graphql:generate` after any GraphQL schema changes
### Code Style Notes
- Use **Linaria** for styling with zero-runtime CSS-in-JS (styled-components pattern)
- Follow **Nx** workspace conventions for imports
- Use **Lingui** for internationalization
- Apply security first, then formatting (sanitize before format)
### Testing Strategy
- **Test behavior, not implementation** — focus on user perspective
- **Test pyramid**: 70% unit, 20% integration, 10% E2E
- Query by user-visible elements (text, roles, labels) over test IDs
- Use `@testing-library/user-event` for realistic interactions
- Descriptive test names: "should [behavior] when [condition]"
- Clear mocks between tests with `jest.clearAllMocks()`
## Dev Environment Setup
All dev environments (Claude Code web, Cursor, local) use one script:
```bash
bash packages/twenty-utils/setup-dev-env.sh
```
This handles everything: starts Postgres + Redis (auto-detects local services vs Docker), creates databases, and copies `.env` files. Idempotent — safe to run multiple times.
- `--docker` — force Docker mode (uses `packages/twenty-docker/docker-compose.dev.yml`)
- `--down` — stop services
- `--reset` — wipe data and restart fresh
- **Skip the setup script** for tasks that only read code — architecture questions, code review, documentation, etc.
**Note:** CI workflows (GitHub Actions) manage services via Actions service containers and run setup steps individually — they don't use this script.
## Important Files
- `nx.json` - Nx workspace configuration with task definitions
- `tsconfig.base.json` - Base TypeScript configuration
- `package.json` - Root package with workspace definitions
- `.cursor/rules/` - Detailed development guidelines and best practices
- **nx-rules.mdc** - Nx workspace guidelines and best practices (Auto-attached to Nx files)
- **server-migrations.mdc** - Backend migration and TypeORM guidelines for `twenty-server` (Auto-attached to server entities and migration files)
- **server-migrations.mdc** - Upgrade command guidelines (instance commands and workspace commands) for `twenty-server` (Auto-attached to server entities and upgrade command files)
- **creating-syncable-entity.mdc** - Comprehensive guide for creating new syncable entities (with universalIdentifier and applicationId) in the workspace migration system (Agent-requested for metadata-modules and workspace-migration files)
### Code Quality
```bash
npx nx run twenty-server:typecheck                       # Type checking
npx nx run twenty-server:test                            # Run unit tests
npx nx run twenty-server:test:integration:with-db-reset  # Run integration tests

# Migrations
npx nx run twenty-server:typeorm migration:generate src/database/typeorm/core/migrations/[name] -d src/database/typeorm/core/core.datasource.ts

# Workspace
npx nx run twenty-server:command workspace:sync-metadata -f # Sync metadata

# Upgrade commands (instance + workspace)
npx nx run twenty-server:database:migrate:generate --name <name> --type <fast|slow>
```
- **When changing an entity, always generate a migration**
- If you modify a `*.entity.ts` file in `packages/twenty-server/src`, you **must** generate a corresponding TypeORM migration instead of manually editing the database schema.
- Use the Nx + TypeORM command from the project root:
The upgrade system uses two types of commands instead of raw TypeORM migrations:
- **Instance commands** — schema and data migrations that run once at the instance level.
- **Workspace commands** — commands that iterate over all active/suspended workspaces.
See `packages/twenty-server/docs/UPGRADE_COMMANDS.md` for full documentation.
### Instance Commands
- **When changing a `*.entity.ts` file**, generate an instance command:
```bash
npx nx run twenty-server:typeorm migration:generate src/database/typeorm/core/migrations/common/[name] -d src/database/typeorm/core/core.datasource.ts
npx nx run twenty-server:database:migrate:generate --name <name> --type <fast|slow>
```
- Replace `[name]` with a descriptive, kebab-case migration name that reflects the change (for example, `add-agent-turn-evaluation`).
- **Fast commands** (`--type fast`, default) are for schema-only changes that must run immediately. They implement `FastInstanceCommand` with `up`/`down` methods and use the `@RegisteredInstanceCommand` decorator.
- **Prefer generated migrations over manual edits**
- Let TypeORM infer schema changes from the updated entities; only adjust the generated migration file manually if absolutely necessary (for example, for data backfills or complex constraints).
- Keep schema changes (DDL) in these generated migrations and avoid mixing in heavy data migrations unless there is a strong reason and clear comments.
- **Slow commands** (`--type slow`) add a `runDataMigration` method for potentially long-running data backfills that execute before `up`. They only run when `--include-slow` is passed. Use the decorator with `{ type: 'slow' }`.
- **Keep migrations consistent and reversible**
- Ensure the generated migration includes both `up` and `down` logic that correctly applies and reverts the entity change when possible.
- Do not delete or rewrite existing, committed migrations unless you are explicitly working on a pre-release branch where history rewrites are allowed by team conventions.
- The generator auto-registers the command in `instance-commands.constant.ts` — do not edit that file manually.
- **Keep commands consistent and reversible**: include both `up` and `down` logic. Do not delete or rewrite existing, committed commands unless on a pre-release branch.
### Workspace Commands
- Use the `@RegisteredWorkspaceCommand` decorator alongside nest-commander's `@Command` decorator.
- Extend `ActiveOrSuspendedWorkspaceCommandRunner` and implement `runOnWorkspace`.
- The base class provides `--dry-run`, `--verbose`, and workspace filter options automatically.
### Execution Order
Within a given version, commands run in this order (timestamp-sorted within each group):
1. Instance fast commands
2. Instance slow commands (only with `--include-slow`)
```bash
echo "::error::Unexpected migration files were generated. Please run 'npx nx database:migrate:generate twenty-server -- --name <migration-name>' and commit the result."
echo ""
echo "The following migration changes were detected:"

npx nx run twenty-front:graphql:generate --configuration=metadata
npx nx run twenty-front:graphql:generate --configuration=admin

if ! git diff --quiet -- packages/twenty-front/src/generated packages/twenty-front/src/generated-metadata packages/twenty-front/src/generated-admin; then
  echo "::error::GraphQL schema changes detected. Please run the three graphql:generate configurations ('data', 'metadata', 'admin') and commit the changes."
  echo ""
  echo "The following GraphQL schema changes were detected:"
```
Twenty gives technical teams the building blocks for a custom CRM that meets complex business needs and quickly adapts as the business evolves. Twenty is the CRM you build, ship, and version like the rest of your stack.
**CRMs are too expensive, and users are trapped.** Companies use locked-in customer data to hike prices. It shouldn't be that way.
**A fresh start is required to build a better experience.** We can learn from past mistakes and craft a cohesive experience inspired by new UX patterns from tools like Notion, Airtable or Linear.
**We believe in open-source and community.** Hundreds of developers are already building Twenty together. Once we have plugin capabilities, a whole ecosystem will grow around it.
<a href="https://twenty.com/why-twenty"><img src="./packages/twenty-website/public/images/readme/star-icon.svg" width="14" height="14"/> Learn more about why we built Twenty</a>
<br/>
# Installation

The fastest way to get started: sign up at [twenty.com](https://twenty.com) and spin up a workspace in under a minute, with no infrastructure to manage and always up to date.

Prefer self-hosting? Run Twenty on your own infrastructure with [Docker Compose](https://docs.twenty.com/developers/self-host/capabilities/docker-compose), or contribute locally via the [local setup guide](https://docs.twenty.com/developers/contribute/capabilities/local-setup).

# What You Can Do With Twenty

Twenty gives you the building blocks of a modern CRM (objects, views, workflows, and agents) and lets you extend them as code. Here's a tour of what's in the box. Below are a few features we have implemented to date:

+ [Personalize layouts with filters, sort, group by, kanban and table views](#personalize-layouts-with-filters-sort-group-by-kanban-and-table-views)
+ [Customize your objects and fields](#customize-your-objects-and-fields)
+ [Create and manage permissions with custom roles](#create-and-manage-permissions-with-custom-roles)
+ [Automate workflow with triggers and actions](#automate-workflow-with-triggers-and-actions)
+ [Emails, calendar events, files, and more](#emails-calendar-events-files-and-more)

Please feel free to flag any specific needs you have by creating an issue.

Want to go deeper? Read the <a href="https://docs.twenty.com/user-guide/introduction"><img src="./packages/twenty-website/public/images/readme/planner-icon.svg" width="14" height="14" /> User Guide</a> for product walkthroughs, or the <a href="https://docs.twenty.com"><img src="./packages/twenty-website/public/images/readme/book-icon.svg" width="14" height="14" /> Documentation</a> for developer reference.

## Personalize layouts with filters, sort, group by, kanban and table views

### <img src="./packages/twenty-website/public/images/readme/book-icon.svg" width="14" height="14" /> Build an app

Scaffold a new app with the Twenty CLI:
<img src="./packages/twenty-website/public/images/readme/v2-build-apps-light.png" alt="Create your apps" />
<p align="center"><a href="https://docs.twenty.com/developers/extend/apps/getting-started"><img src="./packages/twenty-website/public/images/readme/code-icon.svg" width="16" height="16" /> Learn more about apps in the docs</a></p>
<img src="./packages/twenty-website/public/images/readme/v2-version-control-light.png" alt="Stay on top with version control" />
<p align="center"><a href="https://docs.twenty.com/developers/extend/apps/publishing"><img src="./packages/twenty-website/public/images/readme/monitor-icon.svg" width="16" height="16" /> Learn more about version control in the docs</a></p>
<img src="./packages/twenty-website/public/images/readme/v2-all-tools-light.png" alt="All the tools you need to build anything" />
<p align="center"><a href="https://docs.twenty.com/developers/extend/apps/building"><img src="./packages/twenty-website/public/images/readme/rocket-icon.svg" width="16" height="16" /> Learn more about primitives in the docs</a></p>
<img src="./packages/twenty-website/public/images/readme/v2-tools-light.png" alt="Customize your layouts" />
<p align="center"><a href="https://docs.twenty.com/user-guide/layout/overview"><img src="./packages/twenty-website/public/images/readme/planner-icon.svg" width="16" height="16" /> Learn more about layouts in the docs</a></p>
<img src="./packages/twenty-website/public/images/readme/v2-ai-agents-light.png" alt="AI agents and chats" />
<p align="center"><a href="https://docs.twenty.com/user-guide/ai/overview"><img src="./packages/twenty-website/public/images/readme/message-icon.svg" width="16" height="16" /> Learn more about AI in the docs</a></p>
<img src="./packages/twenty-website/public/images/readme/v2-crm-tools-light.png" alt="Plus all the tools of a good CRM" />
<p align="center"><a href="https://docs.twenty.com/user-guide/introduction"><img src="./packages/twenty-website/public/images/readme/star-icon.svg" width="16" height="16" /> Learn more about CRM features in the docs</a></p>
</td>
</tr>
</table>
<br/>
# Stack
- [TypeScript](https://www.typescriptlang.org/)
- [Nx](https://nx.dev/)
- [NestJS](https://nestjs.com/), with [BullMQ](https://bullmq.io/), [PostgreSQL](https://www.postgresql.org/), [Redis](https://redis.io/)
- [React](https://reactjs.org/), with [Jotai](https://jotai.org/), [Linaria](https://linaria.dev/) and [Lingui](https://lingui.dev/)
Thanks to these amazing services that we use and recommend for UI testing (Chromatic), code review (Greptile), catching bugs (Sentry) and translating (Crowdin).
# Join the Community
- Star the repo
- Subscribe to releases (watch -> custom -> releases)
- Follow us on [Twitter](https://twitter.com/twentycrm) or [LinkedIn](https://www.linkedin.com/company/twenty/)
Create Twenty App is the official scaffolding CLI for building apps on top of [Twenty CRM](https://twenty.com). It sets up a ready-to-run project that works seamlessly with the [twenty-sdk](https://www.npmjs.com/package/twenty-sdk).
- Zero-config project bootstrap
- Preconfigured scripts for auth, dev mode (watch & sync), uninstall, and function management
- Strong TypeScript support and typed client generation
## Documentation
See the Twenty apps documentation: https://docs.twenty.com/developers/extend/capabilities/apps
## Prerequisites
- Node.js 24+ (recommended) and Yarn 4
- Docker (for the local Twenty dev server)
## Quick start
```bash
# Scaffold a new app — the CLI will offer to start a local Twenty server
npx create-twenty-app@latest my-twenty-app
cd my-twenty-app
# The scaffolder can automatically:
# 1. Start a local Twenty server (Docker)
# 2. Open the browser to log in (tim@apple.dev / tim@apple.dev)
# 3. Authenticate your app via OAuth
# Or do it manually:
yarn twenty server start # Start local Twenty server
yarn twenty remote add --local # Authenticate via OAuth
# Start dev mode: watches, builds, and syncs local changes to your workspace
yarn twenty dev
```

The scaffolded project also ships a prewired `twenty` script that delegates to the `twenty` CLI from twenty-sdk.
Full documentation is available at **[docs.twenty.com/developers/extend/apps](https://docs.twenty.com/developers/extend/apps/getting-started)**.
**Example files (controlled by scaffolding mode):**
- `objects/example-object.ts` — Example custom object with a text field
- `fields/example-field.ts` — Example standalone field extending the example object
- `logic-functions/hello-world.ts` — Example logic function with HTTP trigger
- `front-components/hello-world.tsx` — Example front component
- `views/example-view.ts` — Example saved view for the example object
- `navigation-menu-items/example-navigation-menu-item.ts` — Example sidebar navigation link
- `skills/example-skill.ts` — Example AI agent skill definition
- `__tests__/app-install.integration-test.ts` — Integration test that builds, installs, and verifies the app (includes `vitest.config.ts`, `tsconfig.spec.json`, and a setup file)
## Local server
The scaffolder can start a local Twenty dev server for you (all-in-one Docker image with PostgreSQL, Redis, server, and worker). You can also manage it manually:
```bash
yarn twenty server start # Start (pulls image if needed)
yarn twenty server status # Check if it's healthy
yarn twenty server logs # Stream logs
yarn twenty server stop # Stop (data is preserved)
yarn twenty server reset # Wipe all data and start fresh
```
The server is pre-seeded with a workspace and user (`tim@apple.dev` / `tim@apple.dev`).
## Next steps
- Run `yarn twenty help` to see all available commands.
- Use `yarn twenty remote add --local` to authenticate with your Twenty workspace via OAuth.
- Explore the generated project and add your first entity with `yarn twenty add` (logic functions, front components, objects, roles, views, navigation menu items, skills).
- Use `yarn twenty dev` while you iterate — it watches, builds, and syncs changes to your workspace in real time.
- `CoreApiClient` (for workspace data via `/graphql`) is auto-generated by `yarn twenty dev`. `MetadataApiClient` (for workspace configuration and file uploads via `/metadata`) ships pre-built with the SDK. Both are available via `import { CoreApiClient, MetadataApiClient } from 'twenty-sdk/clients'`.
## Build and publish your application
Once your app is ready, build and publish it using the CLI:
```bash
# Build the app (output goes to .twenty/output/)
yarn twenty build
# Build and create a tarball (.tgz) for distribution
yarn twenty build --tarball
# Publish to npm (requires npm login)
yarn twenty publish
# Publish with a dist-tag (e.g. beta, next)
yarn twenty publish --tag beta
# Deploy directly to a Twenty server (builds, uploads, and installs in one step)
yarn twenty deploy
```
### Publish to the Twenty marketplace
You can also contribute your application to the curated marketplace:
```bash
git clone https://github.com/twentyhq/twenty.git
cd twenty
git checkout -b feature/my-awesome-app
```
- Copy your app folder into `twenty/packages/twenty-apps`.
- Commit your changes and open a pull request on https://github.com/twentyhq/twenty
Our team reviews contributions for quality, security, and reusability before merging.
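The contribution steps above can be sketched end to end. This is a hypothetical standalone demo: it uses a local temp directory in place of the real clone, and `my-twenty-app` is an illustrative app name:

```shell
set -eu
# Stand-in for `git clone https://github.com/twentyhq/twenty.git` (local temp repo)
tmp=$(mktemp -d)
mkdir -p "$tmp/twenty/packages/twenty-apps"
cd "$tmp/twenty"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -qb feature/my-awesome-app

# Copy your app folder into packages/twenty-apps (app contents are illustrative)
mkdir -p "$tmp/my-twenty-app"
echo '{"name":"my-twenty-app"}' > "$tmp/my-twenty-app/package.json"
cp -r "$tmp/my-twenty-app" packages/twenty-apps/

# Commit; the pull request is then opened against github.com/twentyhq/twenty
git add packages/twenty-apps/my-twenty-app
git commit -qm "feat: add my-twenty-app to the marketplace"
git log --oneline -1
```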
See also in the documentation:
- [Getting Started](https://docs.twenty.com/developers/extend/apps/getting-started) — step-by-step setup, project structure, server management, CI
- [Building Apps](https://docs.twenty.com/developers/extend/apps/building) — entity definitions, API clients, testing
Common pitfalls to avoid:
- Creating an object without an associated index view. Unless it is a purely technical object, users will need a way to visualize it.
- Creating a view without an associated navigationMenuItem. The menu item is what makes the view available in the left sidebar.
- Creating a front-end component that scrolls instead of adapting responsively to its fixed widget height and width, unless it is specifically meant to be used in a canvas tab.