mirror of
https://github.com/lobehub/lobehub
synced 2026-04-21 09:37:28 +00:00
✨ feat: support agent benchmark (#12355)
* improve total fix page size issue fix error message handler fix eval home page try to fix batch run agent step issue fix run list fix dataset loading fix abort issue improve jump and table column fix error streaming try to fix error output in vercel refactor qstash workflow client improve passK add evals to proxy refactor metrics try to fix build refactor tests improve detail page fix passK issue improve eval-rubric fix types support passK fix type update fix db insert issue improve dataset ui improve run config finish step limit now add step limited 100% coverage to models add failed tests todo support interruptOperation fix lint improve report detail improve pass rate improve sort order issue fix timeout issue Update db schema run the full case end to end update database improve error handling refactor to improve database improve the test case handling flow polish details of the experience and implementation mostly complete the end-to-end Benchmark workflow improve run case display fix run case ordering issue improve the eval test case page add eval test mode add dataset page update schema support finish create test run fix update improve import exp refactor data flow improve import workflow rubric Benchmark detail page improve import ux update schema finish eval home page add eval workflow endpoint implement benchmark run model refactor RAG eval implement backend update db schema update db migration init benchmark * support rerun error test case * fix tests * fix tests
This commit is contained in:
parent
c2280561f5
commit
e7598fe90b
243 changed files with 31692 additions and 246 deletions
1175
.agents/skills/data-fetching/SKILL.md
Normal file
@@ -115,6 +115,91 @@ export const agentsKnowledgeBases = pgTable(
);
```

## Query Style

**Always use `db.select()` builder API. Never use `db.query.*` relational API** (`findMany`, `findFirst`, `with:`).

The relational API generates complex lateral joins with `json_build_array` that are fragile and hard to debug.

### Select Single Row

```typescript
// ✅ Good
const [result] = await this.db
  .select()
  .from(agents)
  .where(eq(agents.id, id))
  .limit(1);
return result;

// ❌ Bad: relational API
return this.db.query.agents.findFirst({
  where: eq(agents.id, id),
});
```

### Select with JOIN

```typescript
// ✅ Good: explicit select + leftJoin
const rows = await this.db
  .select({
    runId: agentEvalRunTopics.runId,
    score: agentEvalRunTopics.score,
    testCase: agentEvalTestCases,
    topic: topics,
  })
  .from(agentEvalRunTopics)
  .leftJoin(agentEvalTestCases, eq(agentEvalRunTopics.testCaseId, agentEvalTestCases.id))
  .leftJoin(topics, eq(agentEvalRunTopics.topicId, topics.id))
  .where(eq(agentEvalRunTopics.runId, runId))
  .orderBy(asc(agentEvalRunTopics.createdAt));

// ❌ Bad: relational API with `with:`
return this.db.query.agentEvalRunTopics.findMany({
  where: eq(agentEvalRunTopics.runId, runId),
  with: { testCase: true, topic: true },
});
```

### Select with Aggregation

```typescript
// ✅ Good: select + leftJoin + groupBy
const rows = await this.db
  .select({
    id: agentEvalDatasets.id,
    name: agentEvalDatasets.name,
    testCaseCount: count(agentEvalTestCases.id).as('testCaseCount'),
  })
  .from(agentEvalDatasets)
  .leftJoin(agentEvalTestCases, eq(agentEvalDatasets.id, agentEvalTestCases.datasetId))
  .groupBy(agentEvalDatasets.id);
```

### One-to-Many (Separate Queries)

When you need a parent record with its children, use two queries instead of relational `with:`:

```typescript
// ✅ Good: two simple queries
const [dataset] = await this.db
  .select()
  .from(agentEvalDatasets)
  .where(eq(agentEvalDatasets.id, id))
  .limit(1);

if (!dataset) return undefined;

const testCases = await this.db
  .select()
  .from(agentEvalTestCases)
  .where(eq(agentEvalTestCases.datasetId, id))
  .orderBy(asc(agentEvalTestCases.sortOrder));

return { ...dataset, testCases };
```

## Database Migrations

See `references/db-migrations.md` for a detailed migration guide.

@@ -129,14 +214,27 @@ bun run db:generate:client

### Migration Best Practices

All migration SQL must be **idempotent** (safe to re-run):

```sql
-- ✅ Tables: IF NOT EXISTS
CREATE TABLE IF NOT EXISTS "agent_eval_runs" (...);

-- ✅ Columns: IF NOT EXISTS / IF EXISTS
ALTER TABLE "users" ADD COLUMN IF NOT EXISTS "avatar" text;
DROP TABLE IF EXISTS "old_table";
ALTER TABLE "users" DROP COLUMN IF EXISTS "old_field";

-- ✅ Foreign keys: DROP IF EXISTS + ADD (no IF NOT EXISTS for constraints)
ALTER TABLE "t" DROP CONSTRAINT IF EXISTS "t_fk";
ALTER TABLE "t" ADD CONSTRAINT "t_fk" FOREIGN KEY ("col") REFERENCES "ref"("id") ON DELETE cascade;

-- ✅ Indexes: IF NOT EXISTS
CREATE INDEX IF NOT EXISTS "users_email_idx" ON "users" ("email");

-- ❌ Non-idempotent (will fail on re-run)
CREATE TABLE "agent_eval_runs" (...);
ALTER TABLE "users" ADD COLUMN "avatar" text;
ALTER TABLE "t" ADD CONSTRAINT "t_fk" FOREIGN KEY ...;
```

Rename migration files meaningfully: `0046_meaningless.sql` → `0046_user_add_avatar.sql`

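The idempotency rules above lend themselves to a mechanical check. Below is a hypothetical lint sketch — not part of the codebase — that flags migration statements missing a defensive clause; it uses naive `;`-splitting and illustrative patterns only:

```typescript
// Hypothetical migration lint: flag SQL statements that lack the idempotency
// guards this guide requires. Statement parsing is intentionally naive.
const GUARDED = [
  /^CREATE TABLE IF NOT EXISTS/i,
  /^CREATE (UNIQUE )?INDEX IF NOT EXISTS/i,
  /^DROP (TABLE|INDEX) IF EXISTS/i,
  /^ALTER TABLE .+ ADD COLUMN IF NOT EXISTS/i,
  /^ALTER TABLE .+ DROP COLUMN IF EXISTS/i,
  /^ALTER TABLE .+ DROP CONSTRAINT IF EXISTS/i,
  // ADD CONSTRAINT is allowed, on the assumption it follows a DROP CONSTRAINT IF EXISTS
  /^ALTER TABLE .+ ADD CONSTRAINT/i,
];

const findUnguarded = (sql: string): string[] =>
  sql
    .split(';')
    .map((s) => s.trim())
    .filter(Boolean)
    .filter((stmt) => !GUARDED.some((re) => re.test(stmt)));

const migration = `
CREATE TABLE IF NOT EXISTS "agent_eval_runs" ("id" text PRIMARY KEY);
ALTER TABLE "users" ADD COLUMN "avatar" text;
`;

console.log(findUnguarded(migration)); // flags the unguarded ADD COLUMN
```

Such a check could run in CI before `db:generate`, though the regex list would need to track whatever DDL forms the migrations actually use.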
@@ -24,17 +24,57 @@ Rename auto-generated filename to be meaningful:

## Step 3: Use Idempotent Clauses (Defensive Programming)

Always use defensive clauses to make migrations idempotent (safe to re-run):

### CREATE TABLE

```sql
-- ✅ Good
CREATE TABLE IF NOT EXISTS "agent_eval_runs" (
  "id" text PRIMARY KEY NOT NULL,
  "name" text,
  "created_at" timestamp with time zone DEFAULT now() NOT NULL
);

-- ❌ Bad
CREATE TABLE "agent_eval_runs" (...);
```

### ALTER TABLE - Columns

```sql
-- ✅ Good
ALTER TABLE "users" ADD COLUMN IF NOT EXISTS "avatar" text;
ALTER TABLE "posts" DROP COLUMN IF EXISTS "deprecated_field";

-- ❌ Bad
ALTER TABLE "users" ADD COLUMN "avatar" text;
```

### ALTER TABLE - Foreign Key Constraints

PostgreSQL has no `ADD CONSTRAINT IF NOT EXISTS`. Use `DROP IF EXISTS` + `ADD`:

```sql
-- ✅ Good: Drop first, then add (idempotent)
ALTER TABLE "agent_eval_datasets" DROP CONSTRAINT IF EXISTS "agent_eval_datasets_user_id_users_id_fk";
ALTER TABLE "agent_eval_datasets" ADD CONSTRAINT "agent_eval_datasets_user_id_users_id_fk"
  FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;

-- ❌ Bad: Will fail if constraint already exists
ALTER TABLE "agent_eval_datasets" ADD CONSTRAINT "agent_eval_datasets_user_id_users_id_fk"
  FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE cascade ON UPDATE no action;
```

### DROP TABLE / INDEX

```sql
-- ✅ Good
DROP TABLE IF EXISTS "old_table";
CREATE INDEX IF NOT EXISTS "users_email_idx" ON "users" ("email");
CREATE UNIQUE INDEX IF NOT EXISTS "users_email_unique" ON "users" USING btree ("email");

-- ❌ Bad
DROP TABLE "old_table";
CREATE INDEX "users_email_idx" ON "users" ("email");
```

@@ -25,6 +25,10 @@ Brand: **Where Agents Collaborate** - Focus on collaborative agent system, not j

| 资源 | Resource |
| 库 | Library |
| 模型服务商 | Provider |
| 评测 | Evaluation |
| 基准 | Benchmark |
| 数据集 | Dataset |
| 用例 | Test Case |

## Brand Principles

624
.agents/skills/store-data-structures/SKILL.md
Normal file

@@ -0,0 +1,624 @@

---
name: store-data-structures
description: Zustand store data structure patterns for LobeHub. Covers List vs Detail data structures, Map + Reducer patterns, type definitions, and when to use each pattern. Use when designing store state, choosing data structures, or implementing list/detail pages.
---

# LobeHub Store Data Structures

This guide covers how to structure data in Zustand stores for optimal performance and user experience.

## Core Principles

### ✅ DO

1. **Separate List and Detail** - Use different structures for list pages and detail pages
2. **Use Map for Details** - Cache multiple detail pages with `Record<string, Detail>`
3. **Use Array for Lists** - Simple arrays for list display
4. **Types from @lobechat/types** - Never use `@lobechat/database` types in stores
5. **Distinguish List and Detail types** - List types may have computed UI fields

### ❌ DON'T

1. **Don't use a single detail object** - Can't cache multiple pages
2. **Don't mix List and Detail types** - They have different purposes
3. **Don't use database types** - Use types from `@lobechat/types`
4. **Don't use Map for lists** - Simple arrays are sufficient

---

## Type Definitions

Types should be organized by entity in separate files:

```
@lobechat/types/src/eval/
├── benchmark.ts          # Benchmark types
├── agentEvalDataset.ts   # Dataset types
├── agentEvalRun.ts       # Run types
└── index.ts              # Re-exports
```

### Example: Benchmark Types

```typescript
// packages/types/src/eval/benchmark.ts
import type { EvalBenchmarkRubric } from './rubric';

// ============================================
// Detail Type - Full entity (for detail pages)
// ============================================

/**
 * Full benchmark entity with all fields including heavy data
 */
export interface AgentEvalBenchmark {
  createdAt: Date;
  description?: string | null;
  id: string;
  identifier: string;
  isSystem: boolean;
  metadata?: Record<string, unknown> | null;
  name: string;
  referenceUrl?: string | null;
  rubrics: EvalBenchmarkRubric[]; // Heavy field
  updatedAt: Date;
}

// ============================================
// List Type - Lightweight (for list display)
// ============================================

/**
 * Lightweight benchmark item - excludes heavy fields
 * May include computed statistics for UI
 */
export interface AgentEvalBenchmarkListItem {
  createdAt: Date;
  description?: string | null;
  id: string;
  identifier: string;
  isSystem: boolean;
  name: string;
  // Note: rubrics NOT included (heavy field)

  // Computed statistics for UI display
  datasetCount?: number;
  runCount?: number;
  testCaseCount?: number;
}
```

### Example: Document Types (with heavy content)

```typescript
// packages/types/src/document.ts

/**
 * Full document entity - includes heavy content fields
 */
export interface Document {
  id: string;
  title: string;
  description?: string;
  content: string; // Heavy field - full markdown content
  editorData: any; // Heavy field - editor state
  metadata?: Record<string, unknown>;
  createdAt: Date;
  updatedAt: Date;
}

/**
 * Lightweight document item - excludes heavy content
 */
export interface DocumentListItem {
  id: string;
  title: string;
  description?: string;
  // Note: content and editorData NOT included
  createdAt: Date;
  updatedAt: Date;

  // Computed statistics
  wordCount?: number;
  lastEditedBy?: string;
}
```

**Key Points:**

- **Detail types** include ALL fields from the database (full entity)
- **List types** are **subsets** that exclude heavy/large fields
- List types may add computed statistics for UI (e.g., `testCaseCount`)
- **Each entity gets its own file** (not mixed together)
- **All types** exported from `@lobechat/types`, NOT `@lobechat/database`

**Heavy fields to exclude from List:**

- Large text content (`content`, `editorData`, `fullDescription`)
- Complex objects (`rubrics`, `config`, `metrics`)
- Binary data (`image`, `file`)
- Large arrays (`messages`, `items`)

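One way to keep a List type in sync with its Detail type without extending it is TypeScript's `Pick` utility plus an intersection for the computed fields. A hypothetical sketch (the `DocumentDetail` name and fields are illustrative, mirroring the Document example above):

```typescript
// Hypothetical: derive the lightweight list item from the full entity with
// Pick, so renaming a shared field breaks both types at compile time.
interface DocumentDetail {
  id: string;
  title: string;
  description?: string;
  content: string; // heavy field - excluded from the list type
  createdAt: Date;
  updatedAt: Date;
}

// Subset of DocumentDetail plus computed UI statistics - not an `extends`.
type DocumentListItem = Pick<
  DocumentDetail,
  'id' | 'title' | 'description' | 'createdAt' | 'updatedAt'
> & {
  wordCount?: number; // computed statistic
};

const item: DocumentListItem = {
  id: 'doc_1',
  title: 'Benchmark results',
  createdAt: new Date(),
  updatedAt: new Date(),
  wordCount: 1280,
};

console.log(item.title);
```

Writing the fields out by hand (as the examples above do) works just as well; `Pick` simply trades some readability for a compile-time link between the two types.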
---

## When to Use Map vs Array

### Use Map + Reducer (for Detail Data)

✅ **Detail page data caching** - Cache multiple detail pages simultaneously
✅ **Optimistic updates** - Update UI before the API responds
✅ **Per-item loading states** - Track which items are being updated
✅ **Multiple pages open** - User can navigate between details without refetching

**Structure:**

```typescript
benchmarkDetailMap: Record<string, AgentEvalBenchmark>;
```

**Examples:** Benchmark detail pages, Dataset detail pages, User profiles

### Use Simple Array (for List Data)

✅ **List display** - Lists, tables, cards
✅ **Read-only or refresh-as-whole** - The entire list refreshes together
✅ **No per-item updates** - No need to update individual items
✅ **Simple data flow** - Easier to understand and maintain

**Structure:**

```typescript
benchmarkList: AgentEvalBenchmarkListItem[];
```

**Examples:** Benchmark list, Dataset list, User list

---

## State Structure Pattern

### Complete Example

```typescript
// packages/types/src/eval/benchmark.ts
import type { EvalBenchmarkRubric } from './rubric';

/**
 * Full benchmark entity (for detail pages)
 */
export interface AgentEvalBenchmark {
  id: string;
  name: string;
  description?: string | null;
  identifier: string;
  rubrics: EvalBenchmarkRubric[]; // Heavy field
  metadata?: Record<string, unknown> | null;
  isSystem: boolean;
  createdAt: Date;
  updatedAt: Date;
}

/**
 * Lightweight benchmark (for list display)
 * Excludes heavy fields like rubrics
 */
export interface AgentEvalBenchmarkListItem {
  id: string;
  name: string;
  description?: string | null;
  identifier: string;
  isSystem: boolean;
  createdAt: Date;
  // Note: rubrics excluded

  // Computed statistics
  testCaseCount?: number;
  datasetCount?: number;
  runCount?: number;
}
```

```typescript
// src/store/eval/slices/benchmark/initialState.ts
import type { AgentEvalBenchmark, AgentEvalBenchmarkListItem } from '@lobechat/types';

export interface BenchmarkSliceState {
  // ============================================
  // List Data - Simple Array
  // ============================================
  /**
   * List of benchmarks for list page display
   * May include computed fields like testCaseCount
   */
  benchmarkList: AgentEvalBenchmarkListItem[];
  benchmarkListInit: boolean;

  // ============================================
  // Detail Data - Map for Caching
  // ============================================
  /**
   * Map of benchmark details keyed by ID
   * Caches detail page data for multiple benchmarks
   * Enables optimistic updates and per-item loading
   */
  benchmarkDetailMap: Record<string, AgentEvalBenchmark>;

  /**
   * Track which benchmark details are being loaded/updated
   * For showing spinners on specific items
   */
  loadingBenchmarkDetailIds: string[];

  // ============================================
  // Mutation States
  // ============================================
  isCreatingBenchmark: boolean;
  isUpdatingBenchmark: boolean;
  isDeletingBenchmark: boolean;
}

export const benchmarkInitialState: BenchmarkSliceState = {
  benchmarkList: [],
  benchmarkListInit: false,
  benchmarkDetailMap: {},
  loadingBenchmarkDetailIds: [],
  isCreatingBenchmark: false,
  isUpdatingBenchmark: false,
  isDeletingBenchmark: false,
};
```

---

## Reducer Pattern (for Detail Map)

### Why Use a Reducer?

- **Immutable updates** - Immer ensures immutability
- **Type-safe actions** - TypeScript discriminated unions
- **Testable** - Pure functions are easy to test
- **Reusable** - The same reducer serves optimistic updates and server data

### Reducer Structure

```typescript
// src/store/eval/slices/benchmark/reducer.ts
import { produce } from 'immer';
import type { AgentEvalBenchmark } from '@lobechat/types';

// ============================================
// Action Types
// ============================================

type SetBenchmarkDetailAction = {
  id: string;
  type: 'setBenchmarkDetail';
  value: AgentEvalBenchmark;
};

type UpdateBenchmarkDetailAction = {
  id: string;
  type: 'updateBenchmarkDetail';
  value: Partial<AgentEvalBenchmark>;
};

type DeleteBenchmarkDetailAction = {
  id: string;
  type: 'deleteBenchmarkDetail';
};

export type BenchmarkDetailDispatch =
  | SetBenchmarkDetailAction
  | UpdateBenchmarkDetailAction
  | DeleteBenchmarkDetailAction;

// ============================================
// Reducer Function
// ============================================

export const benchmarkDetailReducer = (
  state: Record<string, AgentEvalBenchmark> = {},
  payload: BenchmarkDetailDispatch,
): Record<string, AgentEvalBenchmark> => {
  switch (payload.type) {
    case 'setBenchmarkDetail': {
      return produce(state, (draft) => {
        draft[payload.id] = payload.value;
      });
    }

    case 'updateBenchmarkDetail': {
      return produce(state, (draft) => {
        if (draft[payload.id]) {
          draft[payload.id] = { ...draft[payload.id], ...payload.value };
        }
      });
    }

    case 'deleteBenchmarkDetail': {
      return produce(state, (draft) => {
        delete draft[payload.id];
      });
    }

    default:
      return state;
  }
};
```

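The reducer contract above is easy to sanity-check by driving it with a sequence of actions. A minimal, immer-free sketch with trimmed types (not the project's actual reducer — plain object spreads behave the same as `produce` for this shallow map):

```typescript
// Immer-free sketch of the same set/update/delete contract.
type Benchmark = { id: string; name: string };

type Dispatch =
  | { type: 'set'; id: string; value: Benchmark }
  | { type: 'update'; id: string; value: Partial<Benchmark> }
  | { type: 'delete'; id: string };

const reducer = (
  state: Record<string, Benchmark>,
  p: Dispatch,
): Record<string, Benchmark> => {
  switch (p.type) {
    case 'set':
      return { ...state, [p.id]: p.value };
    case 'update':
      // No-op when the id is not cached, mirroring the guard in the real reducer
      return state[p.id] ? { ...state, [p.id]: { ...state[p.id], ...p.value } } : state;
    case 'delete': {
      const { [p.id]: _removed, ...rest } = state;
      return rest;
    }
    default:
      return state;
  }
};

let map: Record<string, Benchmark> = {};
map = reducer(map, { type: 'set', id: 'b1', value: { id: 'b1', name: 'MMLU' } });
map = reducer(map, { type: 'update', id: 'b1', value: { name: 'MMLU-Pro' } });
console.log(map['b1'].name); // "MMLU-Pro" - earlier state objects are never mutated
map = reducer(map, { type: 'delete', id: 'b1' });
```

Because every branch returns a new object, each intermediate `map` value stays valid, which is what makes optimistic updates and rollback cheap.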
### Internal Dispatch Methods

```typescript
// In action.ts
export interface BenchmarkAction {
  // ... other methods ...

  // Internal methods - not for direct UI use
  internal_dispatchBenchmarkDetail: (payload: BenchmarkDetailDispatch) => void;
  internal_updateBenchmarkDetailLoading: (id: string, loading: boolean) => void;
}

export const createBenchmarkSlice: StateCreator<...> = (set, get) => ({
  // ... other methods ...

  // Internal - Dispatch to reducer
  internal_dispatchBenchmarkDetail: (payload) => {
    const currentMap = get().benchmarkDetailMap;
    const nextMap = benchmarkDetailReducer(currentMap, payload);

    // Only update if changed
    if (isEqual(nextMap, currentMap)) return;

    set(
      { benchmarkDetailMap: nextMap },
      false,
      `dispatchBenchmarkDetail/${payload.type}`,
    );
  },

  // Internal - Update loading state
  internal_updateBenchmarkDetailLoading: (id, loading) => {
    set(
      (state) => {
        if (loading) {
          return { loadingBenchmarkDetailIds: [...state.loadingBenchmarkDetailIds, id] };
        }
        return {
          loadingBenchmarkDetailIds: state.loadingBenchmarkDetailIds.filter((i) => i !== id),
        };
      },
      false,
      'updateBenchmarkDetailLoading',
    );
  },
});
```

---

## Data Structure Comparison

### ❌ WRONG - Single Detail Object

```typescript
interface BenchmarkSliceState {
  // ❌ Can only cache one detail
  benchmarkDetail: AgentEvalBenchmark | null;

  // ❌ Global loading state
  isLoadingBenchmarkDetail: boolean;
}
```

**Problems:**

- Can only cache one detail page at a time
- Switching between details causes unnecessary refetches
- No optimistic updates
- No per-item loading states

### ✅ CORRECT - Separate List and Detail

```typescript
import type { AgentEvalBenchmark, AgentEvalBenchmarkListItem } from '@lobechat/types';

interface BenchmarkSliceState {
  // ✅ List data - simple array
  benchmarkList: AgentEvalBenchmarkListItem[];
  benchmarkListInit: boolean;

  // ✅ Detail data - map for caching
  benchmarkDetailMap: Record<string, AgentEvalBenchmark>;

  // ✅ Per-item loading
  loadingBenchmarkDetailIds: string[];

  // ✅ Mutation states
  isCreatingBenchmark: boolean;
  isUpdatingBenchmark: boolean;
  isDeletingBenchmark: boolean;
}
```

**Benefits:**

- Cache multiple detail pages
- Fast navigation between cached details
- Optimistic updates with reducer
- Per-item loading states
- Clear separation of concerns

---

## Component Usage

### Accessing List Data

```typescript
const BenchmarkList = () => {
  // Simple array access
  const benchmarks = useEvalStore((s) => s.benchmarkList);
  const isInit = useEvalStore((s) => s.benchmarkListInit);

  if (!isInit) return <Loading />;

  return (
    <div>
      {benchmarks.map((b) => (
        <BenchmarkCard
          key={b.id}
          name={b.name}
          testCaseCount={b.testCaseCount} // Computed field
        />
      ))}
    </div>
  );
};
```

### Accessing Detail Data

```typescript
const BenchmarkDetail = () => {
  const { benchmarkId } = useParams<{ benchmarkId: string }>();

  // Get from map
  const benchmark = useEvalStore((s) =>
    benchmarkId ? s.benchmarkDetailMap[benchmarkId] : undefined,
  );

  // Check loading
  const isLoading = useEvalStore((s) =>
    benchmarkId ? s.loadingBenchmarkDetailIds.includes(benchmarkId) : false,
  );

  if (!benchmark) return <Loading />;

  return (
    <div>
      <h1>{benchmark.name}</h1>
      {isLoading && <Spinner />}
    </div>
  );
};
```

### Using Selectors (Recommended)

```typescript
// src/store/eval/slices/benchmark/selectors.ts
export const benchmarkSelectors = {
  getBenchmarkDetail: (id: string) => (s: EvalStore) => s.benchmarkDetailMap[id],

  isLoadingBenchmarkDetail: (id: string) => (s: EvalStore) =>
    s.loadingBenchmarkDetailIds.includes(id),
};

// In a component
const benchmark = useEvalStore(benchmarkSelectors.getBenchmarkDetail(benchmarkId!));
const isLoading = useEvalStore(benchmarkSelectors.isLoadingBenchmarkDetail(benchmarkId!));
```

---

## Decision Tree

```
Need to store data?
│
├─ Is it a LIST for display?
│   └─ ✅ Use simple array: `xxxList: XxxListItem[]`
│       - May include computed fields
│       - Refreshed as a whole
│       - No optimistic updates needed
│
└─ Is it DETAIL page data?
    └─ ✅ Use Map: `xxxDetailMap: Record<string, Xxx>`
        - Cache multiple details
        - Support optimistic updates
        - Per-item loading states
        - Requires reducer for mutations
```

---

## Checklist

When designing store state structure:

- [ ] **Organize types by entity** in separate files (e.g., `benchmark.ts`, `agentEvalDataset.ts`)
- [ ] Create a **Detail** type (full entity with all fields, including heavy ones)
- [ ] Create a **ListItem** type:
  - [ ] Subset of the Detail type (exclude heavy fields)
  - [ ] May include computed statistics for UI
  - [ ] **NOT** extending the Detail type (it's a subset, not an extension)
- [ ] Use an **array** for list data: `xxxList: XxxListItem[]`
- [ ] Use a **Map** for detail data: `xxxDetailMap: Record<string, Xxx>`
- [ ] Add per-item loading: `loadingXxxDetailIds: string[]`
- [ ] Create a **reducer** for the detail map if optimistic updates are needed
- [ ] Add **internal dispatch** and **loading** methods
- [ ] Create **selectors** for clean access (optional but recommended)
- [ ] Document in comments:
  - [ ] Which fields are excluded from List and why
  - [ ] What computed fields mean
  - [ ] What each Map is for

---

## Best Practices

1. **File organization** - One entity per file, not mixed together
2. **List is a subset** - ListItem excludes heavy fields; it does not extend Detail
3. **Clear naming** - `xxxList` for arrays, `xxxDetailMap` for maps
4. **Consistent patterns** - All detail maps follow the same structure
5. **Type safety** - Never use `any`; always use proper types
6. **Document exclusions** - Comment which fields are excluded from List and why
7. **Selectors** - Encapsulate access patterns
8. **Loading states** - Per-item for details, global for lists
9. **Immutability** - Use Immer in reducers

### Common Mistakes to Avoid

❌ **DON'T extend Detail in List:**

```typescript
// Wrong - List should not extend Detail
export interface BenchmarkListItem extends Benchmark {
  testCaseCount?: number;
}
```

✅ **DO create a separate subset:**

```typescript
// Correct - List is a subset with computed fields
export interface BenchmarkListItem {
  id: string;
  name: string;
  // ... only necessary fields
  testCaseCount?: number; // Computed
}
```

❌ **DON'T mix entities in one file:**

```typescript
// Wrong - all entities in agentEvalEntities.ts
```

✅ **DO separate by entity:**

```typescript
// Correct - separate files
// benchmark.ts
// agentEvalDataset.ts
// agentEvalRun.ts
```

---

## Related Skills

- `data-fetching` - How to fetch and update this data
- `zustand` - General Zustand patterns

1120
.agents/skills/upstash-workflow/SKILL.md
Normal file

369
.agents/skills/upstash-workflow/reference/cloud.md
Normal file

@@ -0,0 +1,369 @@

# Cloud Project Workflow Configuration
|
||||
|
||||
This document covers cloud-specific workflow configurations and patterns for the lobehub-cloud project.
|
||||
|
||||
## Overview
|
||||
|
||||
The lobehub-cloud project extends the open-source lobehub codebase with cloud-specific features. Workflows can be implemented in either:
|
||||
|
||||
1. **Lobehub (open-source)** - Available to all users
|
||||
2. **Lobehub-cloud (proprietary)** - Cloud-specific business logic
|
||||
|
||||
---
|
||||
|
||||
## Directory Structure
|
||||
|
||||
### Lobehub Submodule (Open-source)
|
||||
|
||||
```
|
||||
lobehub/
|
||||
└── src/
|
||||
├── app/(backend)/api/workflows/
|
||||
│ ├── memory-user-memory/ # Memory extraction workflows
|
||||
│ └── agent-eval-run/ # Benchmark evaluation workflows
|
||||
└── server/workflows/
|
||||
├── agentEvalRun/
|
||||
└── ...
|
||||
```
|
||||
|
||||
### Lobehub-cloud (Proprietary)
|
||||
|
||||
```
|
||||
lobehub-cloud/
|
||||
└── src/
|
||||
├── app/(backend)/api/workflows/
|
||||
│ ├── welcome-placeholder/ # Cloud-only: AI placeholder generation
|
||||
│ ├── agent-welcome/ # Cloud-only: Agent welcome messages
|
||||
│ ├── agent-eval-run/ # Re-export from lobehub
|
||||
│ └── memory-user-memory/ # Re-export from lobehub
|
||||
└── server/workflows/
|
||||
├── welcomePlaceholder/
|
||||
├── agentWelcome/
|
||||
└── agentEvalRun/          # Re-export from lobehub
```

---

## Cloud-Specific Patterns

### Pattern 1: Cloud-Only Workflows

**Use Case**: Features exclusive to cloud users (AI generation, premium features)

**Example**: `welcome-placeholder`, `agent-welcome`

**Implementation**:

- Implement directly in `lobehub-cloud/src/app/(backend)/api/workflows/`
- No need for re-exports
- Can use cloud-specific packages and services

**Structure**:

```
lobehub-cloud/src/
├── app/(backend)/api/workflows/
│   └── feature-name/
│       ├── process-items/route.ts
│       ├── paginate-items/route.ts
│       └── execute-item/route.ts
└── server/workflows/
    └── featureName/
        └── index.ts
```

---

### Pattern 2: Re-export from Lobehub

**Use Case**: Workflows implemented in open-source but also used in cloud

**Example**: `agent-eval-run`, `memory-user-memory`

**Why Re-export?**

- Cloud deployment needs to serve these endpoints
- Lobehub submodule code is not directly accessible in cloud routes
- Allows cloud-specific overrides if needed in the future

#### Re-export Implementation

**Step 1**: Implement the workflow in the lobehub submodule

```typescript
// lobehub/src/app/(backend)/api/workflows/feature/layer/route.ts
import { serve } from '@upstash/workflow/nextjs';

export const { POST } = serve<Payload>(
  async (context) => {
    // Implementation
  },
  { flowControl: { ... } }
);
```

**Step 2**: Create a re-export in lobehub-cloud

```typescript
// lobehub-cloud/src/app/(backend)/api/workflows/feature/layer/route.ts
export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature/layer/route';
```

**Important**: Use the `lobehub/src/...` path, NOT `@/...`, to avoid circular imports.

#### Re-export Directory Structure

```bash
# Create directories
mkdir -p lobehub-cloud/src/app/(backend)/api/workflows/feature-name/layer-1
mkdir -p lobehub-cloud/src/app/(backend)/api/workflows/feature-name/layer-2
mkdir -p lobehub-cloud/src/app/(backend)/api/workflows/feature-name/layer-3

# Create re-export files
echo "export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature-name/layer-1/route';" > \
  lobehub-cloud/src/app/(backend)/api/workflows/feature-name/layer-1/route.ts

echo "export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature-name/layer-2/route';" > \
  lobehub-cloud/src/app/(backend)/api/workflows/feature-name/layer-2/route.ts

echo "export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature-name/layer-3/route';" > \
  lobehub-cloud/src/app/(backend)/api/workflows/feature-name/layer-3/route.ts
```
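These three near-identical commands can also be generated with a small loop; the feature and layer names below are the placeholders from this example, not real workflow names:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder names from the example above — substitute your real feature/layers.
feature="feature-name"
base="lobehub-cloud/src/app/(backend)/api/workflows/${feature}"

for layer in layer-1 layer-2 layer-3; do
  # One directory plus one single-line re-export file per workflow layer
  mkdir -p "${base}/${layer}"
  printf "export { POST } from 'lobehub/src/app/(backend)/api/workflows/%s/%s/route';\n" \
    "${feature}" "${layer}" > "${base}/${layer}/route.ts"
done
```

Adding a new layer later is then a one-word change to the loop list.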

---

## TypeScript Path Mappings

The cloud project uses tsconfig path mappings to override lobehub code:

```json
// lobehub-cloud/tsconfig.json
{
  "compilerOptions": {
    "paths": {
      "@/*": ["./src/*", "./lobehub/src/*"]
    }
  }
}
```

**Resolution Order**:

1. `./src/*` (cloud code) - checked first
2. `./lobehub/src/*` (open-source) - fallback

This allows cloud to override specific modules while using lobehub defaults.
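The first-match rule can be sketched as a tiny resolver; the `exists` callback stands in for the compiler's file lookup, and the whole function is an illustration of the behavior, not tsc's actual implementation:

```typescript
// First-match alias resolution: for '@/x', the first base directory
// that contains the module wins, mirroring the `paths` array order.
const resolveAlias = (
  specifier: string,
  bases: string[],
  exists: (path: string) => boolean,
): string | undefined => {
  if (!specifier.startsWith('@/')) return undefined;
  const rest = specifier.slice(2);
  for (const base of bases) {
    const candidate = `${base}/${rest}`;
    if (exists(candidate)) return candidate; // cloud override shadows lobehub
  }
  return undefined;
};

// Hypothetical layout: cloud overrides branding, uuid only exists upstream.
const files = new Set([
  './src/const/branding',
  './lobehub/src/const/branding',
  './lobehub/src/utils/uuid',
]);
const bases = ['./src', './lobehub/src'];
```

So `resolveAlias('@/const/branding', bases, (p) => files.has(p))` picks the cloud copy, while `'@/utils/uuid'` falls back to the submodule.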

---

## Workflow Class Location

### Cloud-Only Workflows

Place workflow class in cloud:

```
lobehub-cloud/src/server/workflows/featureName/index.ts
```

### Shared Workflows

Place workflow class in lobehub, re-export in cloud if needed:

```
lobehub/src/server/workflows/featureName/index.ts
```

---

## Environment Variables

Both lobehub and cloud workflows require:

```bash
# Required for all workflows
APP_URL=https://your-app.com          # Base URL for workflow endpoints
QSTASH_TOKEN=qstash_xxx               # QStash authentication token

# Optional (for custom QStash URL)
QSTASH_URL=https://custom-qstash.com  # Custom QStash endpoint
```

**Cloud-Specific**:

```bash
# Cloud database (for monetization features)
CLOUD_DATABASE_URL=postgresql://...

# Cloud-specific services
REDIS_URL=redis://...
```
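A fail-fast startup check for the required variables might look like this sketch (the helper is hypothetical — lobehub's actual env validation may differ):

```typescript
// Return the names of required variables that are unset or blank.
const missingEnv = (
  env: Record<string, string | undefined>,
  required: string[],
): string[] => required.filter((key) => !env[key]?.trim());

// In a real app `env` would be process.env; a literal keeps this self-contained.
const env = { APP_URL: 'https://your-app.com', QSTASH_TOKEN: '' };
const missing = missingEnv(env, ['APP_URL', 'QSTASH_TOKEN']);
if (missing.length > 0) {
  // Reporting at boot surfaces misconfiguration before any workflow is triggered.
  console.error(`Missing workflow env vars: ${missing.join(', ')}`);
}
```

Checking at boot is cheaper than debugging a QStash callback that silently 401s mid-run.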

---

## Best Practices

### 1. Decide: Cloud or Open-Source?

**Implement in Lobehub if**:

- Feature is useful for all LobeChat users
- No proprietary business logic
- Can be open-sourced

**Implement in Cloud if**:

- Premium/paid feature
- Uses cloud-specific services
- Contains proprietary algorithms

### 2. Re-export Pattern

✅ **Do**:

```typescript
// Simple re-export
export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature/route';
```

❌ **Don't**:

```typescript
// Avoid circular imports with @/ path
export { POST } from '@/app/(backend)/api/workflows/feature/route'; // ❌
```

### 3. Keep Workflow Logic in Lobehub

For shared features:

- Implement core logic in `lobehub/` (open-source)
- Only override if cloud needs different behavior
- Use re-exports for cloud deployment

### 4. Directory Naming

Follow consistent naming across lobehub and cloud:

```
# Both should use same structure
lobehub/src/app/(backend)/api/workflows/feature-name/
lobehub-cloud/src/app/(backend)/api/workflows/feature-name/
```

---

## Migration Guide

### Moving Workflow from Cloud to Lobehub

**Step 1**: Copy workflow to lobehub

```bash
cp -r lobehub-cloud/src/app/(backend)/api/workflows/feature \
  lobehub/src/app/(backend)/api/workflows/
```

**Step 2**: Remove cloud-specific dependencies

- Replace cloud services with generic interfaces
- Remove proprietary business logic
- Update imports to use lobehub paths
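"Replace cloud services with generic interfaces" usually means inverting the dependency. The sketch below is illustrative (the interface and class names are hypothetical, not actual lobehub APIs): the open-source workflow depends only on an interface, and cloud injects its own implementation.

```typescript
// Generic interface defined on the lobehub side — no cloud imports here.
interface CacheStore {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// Default implementation shipped with the open-source side.
class MemoryCacheStore implements CacheStore {
  private data = new Map<string, string>();
  async get(key: string) { return this.data.get(key); }
  async set(key: string, value: string) { this.data.set(key, value); }
}

// The workflow sees only the interface; cloud passes a Redis-backed store instead.
const createWorkflow = (cache: CacheStore) => ({
  run: async (id: string): Promise<string> => {
    const cached = await cache.get(id);
    if (cached !== undefined) return cached;
    const result = `processed:${id}`; // placeholder for the real work
    await cache.set(id, result);
    return result;
  },
});
```

With this shape, moving the workflow into lobehub requires no code changes — only the injected store differs between deployments.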
**Step 3**: Create re-exports in cloud

```typescript
// lobehub-cloud/src/app/(backend)/api/workflows/feature/*/route.ts
export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature/*/route';
```

**Step 4**: Move workflow class to lobehub

```bash
mv lobehub-cloud/src/server/workflows/feature \
  lobehub/src/server/workflows/
```

**Step 5**: Update cloud imports

```typescript
// Change from
import { Workflow } from '@/server/workflows/feature';

// To
import { Workflow } from 'lobehub/src/server/workflows/feature';
```

---

## Examples

### Cloud-Only Workflow: welcome-placeholder

**Location**: `lobehub-cloud/src/app/(backend)/api/workflows/welcome-placeholder/`

**Why Cloud-Only**: Uses proprietary AI generation service and Redis caching

**Structure**:

```
lobehub-cloud/
├── src/app/(backend)/api/workflows/welcome-placeholder/
│   ├── process-users/route.ts
│   ├── paginate-users/route.ts
│   └── generate-user/route.ts
└── src/server/workflows/welcomePlaceholder/
    └── index.ts
```

### Re-exported Workflow: agent-eval-run

**Location**:

- Implementation: `lobehub/src/app/(backend)/api/workflows/agent-eval-run/`
- Re-export: `lobehub-cloud/src/app/(backend)/api/workflows/agent-eval-run/`

**Why Re-export**: Core feature available in open-source, also used by cloud

**Cloud Re-export Files**:

```typescript
// lobehub-cloud/src/app/(backend)/api/workflows/agent-eval-run/run-benchmark/route.ts
export { POST } from 'lobehub/src/app/(backend)/api/workflows/agent-eval-run/run-benchmark/route';

// lobehub-cloud/src/app/(backend)/api/workflows/agent-eval-run/paginate-test-cases/route.ts
export { POST } from 'lobehub/src/app/(backend)/api/workflows/agent-eval-run/paginate-test-cases/route';

// ... (all layers)
```

---

## Troubleshooting

### Circular Import Error

**Error**: `Circular definition of import alias 'POST'`

**Cause**: Using `@/` path in a re-export within the cloud codebase

**Solution**: Use the `lobehub/src/` path instead

```typescript
// ❌ Wrong
export { POST } from '@/app/(backend)/api/workflows/feature/route';

// ✅ Correct
export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature/route';
```

### Workflow Not Found (404)

**Cause**: Missing re-export in cloud

**Solution**: Create re-export files for all workflow layers

```bash
# Check if re-export exists
ls lobehub-cloud/src/app/\(backend\)/api/workflows/feature-name/

# If missing, create re-exports
mkdir -p lobehub-cloud/src/app/\(backend\)/api/workflows/feature-name/layer
echo "export { POST } from 'lobehub/src/app/(backend)/api/workflows/feature-name/layer/route';" > \
  lobehub-cloud/src/app/\(backend\)/api/workflows/feature-name/layer/route.ts
```

### Type Errors After Moving to Lobehub

**Cause**: Cloud-specific types or services used in lobehub code

**Solution**:

1. Extract cloud-specific logic to a cloud-only wrapper
2. Use dependency injection for services
3. Define generic interfaces in lobehub

---

## Related Documentation

- [SKILL.md](../SKILL.md) - Standard workflow patterns
@@ -102,6 +102,107 @@ table agent_cron_jobs {
  }
}

table agent_eval_benchmarks {
  id text [pk, not null]
  identifier text [not null]
  name text [not null]
  description text
  rubrics jsonb [not null]
  reference_url text
  metadata jsonb
  is_system boolean [not null, default: true]
  accessed_at "timestamp with time zone" [not null, default: `now()`]
  created_at "timestamp with time zone" [not null, default: `now()`]
  updated_at "timestamp with time zone" [not null, default: `now()`]

  indexes {
    identifier [name: 'agent_eval_benchmarks_identifier_unique', unique]
    is_system [name: 'agent_eval_benchmarks_is_system_idx']
  }
}

table agent_eval_datasets {
  id text [pk, not null]
  benchmark_id text [not null]
  identifier text [not null]
  user_id text
  name text [not null]
  description text
  eval_mode text
  eval_config jsonb
  metadata jsonb
  accessed_at "timestamp with time zone" [not null, default: `now()`]
  created_at "timestamp with time zone" [not null, default: `now()`]
  updated_at "timestamp with time zone" [not null, default: `now()`]

  indexes {
    (identifier, user_id) [name: 'agent_eval_datasets_identifier_user_id_unique', unique]
    benchmark_id [name: 'agent_eval_datasets_benchmark_id_idx']
    user_id [name: 'agent_eval_datasets_user_id_idx']
  }
}

table agent_eval_run_topics {
  user_id text [not null]
  run_id text [not null]
  topic_id text [not null]
  test_case_id text [not null]
  status text
  score real
  passed boolean
  eval_result jsonb
  created_at "timestamp with time zone" [not null, default: `now()`]

  indexes {
    (run_id, topic_id) [pk]
    user_id [name: 'agent_eval_run_topics_user_id_idx']
    run_id [name: 'agent_eval_run_topics_run_id_idx']
    test_case_id [name: 'agent_eval_run_topics_test_case_id_idx']
  }
}

table agent_eval_runs {
  id text [pk, not null]
  dataset_id text [not null]
  target_agent_id text
  user_id text [not null]
  name text
  status text [not null, default: 'idle']
  config jsonb
  metrics jsonb
  started_at "timestamp with time zone"
  accessed_at "timestamp with time zone" [not null, default: `now()`]
  created_at "timestamp with time zone" [not null, default: `now()`]
  updated_at "timestamp with time zone" [not null, default: `now()`]

  indexes {
    dataset_id [name: 'agent_eval_runs_dataset_id_idx']
    user_id [name: 'agent_eval_runs_user_id_idx']
    status [name: 'agent_eval_runs_status_idx']
    target_agent_id [name: 'agent_eval_runs_target_agent_id_idx']
  }
}

table agent_eval_test_cases {
  id text [pk, not null]
  user_id text [not null]
  dataset_id text [not null]
  content jsonb [not null]
  eval_mode text
  eval_config jsonb
  metadata jsonb
  sort_order integer
  accessed_at "timestamp with time zone" [not null, default: `now()`]
  created_at "timestamp with time zone" [not null, default: `now()`]
  updated_at "timestamp with time zone" [not null, default: `now()`]

  indexes {
    user_id [name: 'agent_eval_test_cases_user_id_idx']
    dataset_id [name: 'agent_eval_test_cases_dataset_id_idx']
    sort_order [name: 'agent_eval_test_cases_sort_order_idx']
  }
}

table agent_skills {
  id text [pk, not null]
  name text [not null]

@@ -1198,6 +1299,7 @@ table threads {
    (client_id, user_id) [name: 'threads_client_id_user_id_unique', unique]
    user_id [name: 'threads_user_id_idx']
    topic_id [name: 'threads_topic_id_idx']
    type [name: 'threads_type_idx']
    agent_id [name: 'threads_agent_id_idx']
    group_id [name: 'threads_group_id_idx']
    parent_thread_id [name: 'threads_parent_thread_id_idx']

@@ -1260,6 +1362,7 @@ table topics {
    session_id [name: 'topics_session_id_idx']
    group_id [name: 'topics_group_id_idx']
    agent_id [name: 'topics_agent_id_idx']
    trigger [name: 'topics_trigger_idx']
    () [name: 'topics_extract_status_gin_idx']
  }
}

@@ -1563,6 +1666,24 @@ ref: auth_sessions.user_id > users.id

ref: two_factor.user_id > users.id

ref: agent_eval_datasets.benchmark_id > agent_eval_benchmarks.id

ref: agent_eval_datasets.user_id - users.id

ref: agent_eval_run_topics.run_id > agent_eval_runs.id

ref: agent_eval_run_topics.topic_id - topics.id

ref: agent_eval_run_topics.test_case_id > agent_eval_test_cases.id

ref: agent_eval_runs.dataset_id > agent_eval_datasets.id

ref: agent_eval_runs.target_agent_id - agents.id

ref: agent_eval_runs.user_id - users.id

ref: agent_eval_test_cases.dataset_id > agent_eval_datasets.id

ref: agents_files.file_id > files.id

ref: agents_files.agent_id > agents.id
@@ -308,11 +308,6 @@
      "count": 1
    }
  },
  "src/libs/next/proxy/define-config.ts": {
    "no-console": {
      "count": 1
    }
  },
  "src/libs/observability/traceparent.test.ts": {
    "import/first": {
      "count": 1

@@ -349,9 +344,14 @@
      "count": 1
    }
  },
  "src/server/modules/Mecha/ContextEngineering/index.ts": {
    "sort-keys-fix/sort-keys-fix": {
      "count": 1
  "src/server/manifest.ts": {
    "object-shorthand": {
      "count": 3
    }
  },
  "src/server/modules/KeyVaultsEncrypt/index.ts": {
    "object-shorthand": {
      "count": 2
    }
  },
  "src/server/modules/ModelRuntime/apiKeyManager.test.ts": {
@@ -397,6 +397,7 @@
  "tab.chat": "Chat",
  "tab.community": "Community",
  "tab.discover": "Discover",
  "tab.eval": "Eval Lab",
  "tab.files": "Files",
  "tab.home": "Home",
  "tab.knowledgeBase": "Library",
locales/en-US/eval.json (new file, 316 lines)
@@ -0,0 +1,316 @@
{
  "benchmark.actions.delete": "Delete Benchmark",
  "benchmark.actions.delete.confirm": "Are you sure you want to delete this benchmark? Related datasets and evaluation records will also be deleted.",
  "benchmark.actions.edit": "Edit Benchmark",
  "benchmark.actions.export": "Export",
  "benchmark.card.bestScore": "Best",
  "benchmark.card.caseCount": "{{count}} cases",
  "benchmark.card.datasetCount": "{{count}} datasets",
  "benchmark.card.empty": "No evaluations yet",
  "benchmark.card.emptyHint": "Create a new evaluation from the benchmark detail page",
  "benchmark.card.importDataset": "Import Dataset",
  "benchmark.card.noDataset": "No datasets yet",
  "benchmark.card.noDatasetHint": "Import a dataset to start evaluating",
  "benchmark.card.noRecentRuns": "No recent evaluations to display",
  "benchmark.card.recentRuns": "Recent Evaluations",
  "benchmark.card.runCount": "{{count}} evals",
  "benchmark.card.startFirst": "Start First Evaluation",
  "benchmark.card.viewAll": "View all {{count}}",
  "benchmark.create.confirm": "Create",
  "benchmark.create.description.label": "Description",
  "benchmark.create.description.placeholder": "Benchmark description (optional)",
  "benchmark.create.error": "Failed to create benchmark",
  "benchmark.create.identifier.label": "Identifier",
  "benchmark.create.identifier.placeholder": "benchmark-identifier",
  "benchmark.create.identifierRequired": "Please enter an identifier",
  "benchmark.create.name.label": "Name",
  "benchmark.create.name.placeholder": "Enter benchmark name",
  "benchmark.create.nameRequired": "Please enter a benchmark name",
  "benchmark.create.success": "Benchmark created successfully",
  "benchmark.create.tags.label": "Tags",
  "benchmark.create.tags.placeholder": "Add tags, separate with comma or space",
  "benchmark.create.title": "Create Benchmark",
  "benchmark.detail.backToOverview": "Back to Overview",
  "benchmark.detail.datasetCount": "{{count}} dataset{{count, plural, one {} other {s}}} in this benchmark",
  "benchmark.detail.runCount": "{{count}} evaluation run{{count, plural, one {} other {s}}} on this benchmark",
  "benchmark.detail.stats.addFirstDataset": "Click to add first dataset",
  "benchmark.detail.stats.avgCost": "Avg Cost",
  "benchmark.detail.stats.avgDuration": "Avg Duration",
  "benchmark.detail.stats.basedOnLastNRuns": "Based on last {{count}} runs",
  "benchmark.detail.stats.bestPerformance": "Best performance by {{agent}} with {{passRate}}% pass rate",
  "benchmark.detail.stats.bestScore": "Best Score",
  "benchmark.detail.stats.cases": "Cases",
  "benchmark.detail.stats.dataScale": "Data Scale",
  "benchmark.detail.stats.datasets": "Datasets",
  "benchmark.detail.stats.needSetup": "Setup Required",
  "benchmark.detail.stats.noEvalRecord": "No evaluation records yet",
  "benchmark.detail.stats.perRun": "/ Run",
  "benchmark.detail.stats.runs": "Runs",
  "benchmark.detail.stats.tags": "Tags",
  "benchmark.detail.stats.topAgents": "Top Agents",
  "benchmark.detail.stats.totalCases": "Total Cases",
  "benchmark.detail.stats.waiting": "Waiting...",
  "benchmark.detail.tabs.data": "Data",
  "benchmark.detail.tabs.datasets": "Datasets",
  "benchmark.detail.tabs.runs": "Evaluations",
  "benchmark.edit.confirm": "Save",
  "benchmark.edit.error": "Failed to update benchmark",
  "benchmark.edit.success": "Benchmark updated successfully",
  "benchmark.edit.title": "Edit Benchmark",
  "benchmark.empty": "No benchmarks yet. Create one to get started.",
  "caseDetail.actual": "Actual Output",
  "caseDetail.chatArea.title": "Conversation",
  "caseDetail.completionReason": "Status",
  "caseDetail.cost": "Cost",
  "caseDetail.difficulty": "Difficulty",
  "caseDetail.duration": "Duration",
  "caseDetail.expected": "Expected Output",
  "caseDetail.failureReason": "Failure Reason",
  "caseDetail.input": "Input",
  "caseDetail.judgeComment": "Judge Comment",
  "caseDetail.resources": "Resources",
  "caseDetail.score": "Score",
  "caseDetail.section.runtime": "Runtime",
  "caseDetail.section.scoring": "Scoring Details",
  "caseDetail.section.testCase": "Test Case",
  "caseDetail.steps": "Steps",
  "caseDetail.threads.attempt": "Trajectory #{{number}}",
  "caseDetail.tokens": "Token Usage",
  "common.cancel": "Cancel",
  "common.create": "Create",
  "common.delete": "Delete",
  "common.edit": "Edit",
  "common.later": "Later",
  "common.next": "Next",
  "common.update": "Update",
  "dataset.actions.addDataset": "Add Dataset",
  "dataset.actions.import": "Import Data",
  "dataset.actions.importDataset": "Import Dataset",
  "dataset.create.description.label": "Description",
  "dataset.create.description.placeholder": "Dataset description (optional)",
  "dataset.create.error": "Failed to create dataset",
  "dataset.create.identifier.label": "Identifier",
  "dataset.create.identifier.placeholder": "dataset-identifier",
  "dataset.create.identifierRequired": "Please enter an identifier",
  "dataset.create.importNow": "Would you like to import data now?",
  "dataset.create.name.label": "Dataset Name",
  "dataset.create.name.placeholder": "Enter dataset name",
  "dataset.create.nameRequired": "Please enter a dataset name",
  "dataset.create.preset.label": "Dataset Preset",
  "dataset.create.success": "Dataset created successfully",
  "dataset.create.successTitle": "Dataset Created",
  "dataset.create.title": "Create Dataset",
  "dataset.delete.confirm": "Are you sure you want to delete this dataset? All test cases in it will also be deleted.",
  "dataset.delete.error": "Failed to delete dataset",
  "dataset.delete.success": "Dataset deleted successfully",
  "dataset.detail.addRun": "New Evaluation",
  "dataset.detail.backToBenchmark": "Back to Benchmark",
  "dataset.detail.caseCount": "{{count}} test case{{count, plural, one {} other {s}}}",
  "dataset.detail.relatedRuns": "Related Evaluations ({{count}})",
  "dataset.detail.testCases": "Test Cases",
  "dataset.detail.viewDetail": "View Details",
  "dataset.edit.error": "Failed to update dataset",
  "dataset.edit.success": "Dataset updated successfully",
  "dataset.edit.title": "Edit Dataset",
  "dataset.empty": "No datasets",
  "dataset.empty.description": "Import a dataset to start building this benchmark",
  "dataset.empty.title": "No datasets yet",
  "dataset.evalMode.hint": "Default eval mode for the dataset, can be overridden at test case level",
  "dataset.import.category": "Category",
  "dataset.import.categoryDesc": "Classification label for grouping",
  "dataset.import.choices": "Choices",
  "dataset.import.choicesDesc": "Multiple-choice options",
  "dataset.import.confirm": "Import",
  "dataset.import.error": "Failed to import dataset",
  "dataset.import.expected": "Expected Answer",
  "dataset.import.expectedDelimiter": "Answer Delimiter",
  "dataset.import.expectedDelimiter.desc": "Answer delimiter",
  "dataset.import.expectedDelimiter.placeholder": "e.g. | or ,",
  "dataset.import.expectedDesc": "Correct answer to compare against",
  "dataset.import.fieldMapping": "Field Mapping",
  "dataset.import.fieldMapping.desc": "\"Input\" column is required",
  "dataset.import.hideSkipped": "Hide skipped columns",
  "dataset.import.ignore": "Skip",
  "dataset.import.ignoreDesc": "Do not import this column",
  "dataset.import.input": "Input",
  "dataset.import.inputDesc": "Question or prompt sent to model",
  "dataset.import.metadata": "Metadata",
  "dataset.import.metadataDesc": "Extra info, stored as-is",
  "dataset.import.next": "Next",
  "dataset.import.parseError": "Failed to parse file",
  "dataset.import.parsing": "Parsing file...",
  "dataset.import.prev": "Previous",
  "dataset.import.preview": "Data Preview",
  "dataset.import.preview.desc": "Confirm the mapping is correct, then import.",
  "dataset.import.preview.rows": "{{count}} rows total",
  "dataset.import.sortOrder": "Item Number",
  "dataset.import.sortOrderDesc": "Question/item ID for reference",
  "dataset.import.step.mapping": "Map Fields",
  "dataset.import.step.preview": "Preview",
  "dataset.import.step.upload": "Upload File",
  "dataset.import.success": "Successfully imported {{count}} test cases",
  "dataset.import.title": "Import Dataset",
  "dataset.import.upload.hint": "Supports CSV, XLSX, JSON, JSONL",
  "dataset.import.upload.text": "Click or drag file here to upload",
  "dataset.import.uploading": "Uploading...",
  "dataset.switchDataset": "Switch Dataset",
  "difficulty.easy": "Easy",
  "difficulty.hard": "Hard",
  "difficulty.medium": "Medium",
  "evalMode.contains": "Contains Match",
  "evalMode.contains.desc": "Output must contain the expected text",
  "evalMode.equals": "Exact Match",
  "evalMode.equals.desc": "Output must be exactly the same as expected",
  "evalMode.label": "Eval Mode",
  "evalMode.llm-rubric": "LLM Judge",
  "evalMode.llm-rubric.desc": "Use LLM to evaluate output quality",
  "evalMode.placeholder": "Select eval mode",
  "evalMode.prompt.label": "Judge Prompt",
  "evalMode.prompt.placeholder": "Enter the evaluation criteria or prompt for LLM judge",
  "evalMode.rubric": "Rubric Scoring",
  "evalMode.rubric.desc": "Score output using benchmark rubrics with weighted criteria",
  "overview.createBenchmark": "Create Benchmark",
  "overview.importDataset": "Import Dataset",
  "overview.subtitle": "Benchmark and evaluate your AI agents across datasets",
  "overview.title": "Evaluation Lab",
  "run.actions.abort": "Abort",
  "run.actions.abort.confirm": "Are you sure you want to abort this evaluation?",
  "run.actions.create": "New Evaluation",
  "run.actions.delete": "Delete",
  "run.actions.delete.confirm": "Are you sure you want to delete this evaluation?",
  "run.actions.edit": "Edit",
  "run.actions.retryCase": "Retry",
  "run.actions.retryErrors": "Retry Errors",
  "run.actions.retryErrors.confirm": "This will re-run all error and timeout cases. Passed and failed cases will not be affected.",
  "run.actions.run": "Run",
  "run.actions.start": "Start",
  "run.actions.start.confirm": "Are you sure you want to start this evaluation?",
  "run.chart.duration": "Duration (s)",
  "run.chart.error": "Error",
  "run.chart.fail": "Fail",
  "run.chart.latencyDistribution": "Latency Distribution",
  "run.chart.latencyTokenDistribution": "Latency / Token Distribution",
  "run.chart.pass": "Pass",
  "run.chart.passFailError": "Pass / Fail / Error",
  "run.chart.tokens": "Tokens",
  "run.config.agentId": "Agent",
  "run.config.concurrency": "Concurrency",
  "run.config.judgeModel": "Judge Model",
  "run.config.k": "Executions (K)",
  "run.config.k.hint": "Run each test case {{k}} times for pass@{{k}}/pass^{{k}} metrics",
  "run.config.maxSteps": "Max Steps",
  "run.config.maxSteps.hint": "Each LLM call or tool call by the agent counts as 1 step",
  "run.config.model": "Model",
  "run.config.temperature": "Temperature",
  "run.config.timeout": "Timeout",
  "run.config.timeout.unit": "min",
  "run.create.advanced": "Advanced Settings",
  "run.create.agent": "Agent",
  "run.create.agent.placeholder": "Select an agent",
  "run.create.agent.required": "Please select an agent",
  "run.create.caseCount": "{{count}} cases",
  "run.create.confirm": "Create & Start",
  "run.create.createOnly": "Create",
  "run.create.dataset": "Dataset",
  "run.create.dataset.placeholder": "Select a dataset",
  "run.create.dataset.required": "Please select a dataset",
  "run.create.name": "Run Name",
  "run.create.name.placeholder": "Enter a name for this run",
  "run.create.name.required": "Please enter a run name",
  "run.create.name.useTimestamp": "Use current time as name",
  "run.create.openAgent": "Open agent in new window",
  "run.create.title": "New Evaluation",
  "run.create.titleWithDataset": "New Evaluation on \"{{dataset}}\"",
  "run.detail.agent": "Agent",
  "run.detail.agent.none": "Not specified",
  "run.detail.agent.unnamed": "Unnamed Agent",
  "run.detail.backToBenchmark": "Back to Benchmark",
  "run.detail.caseResults": "Eval Details",
  "run.detail.config": "Evaluation Config",
  "run.detail.configSnapshot": "Configuration Snapshot",
  "run.detail.dataset": "Dataset",
  "run.detail.model": "Model",
  "run.detail.overview": "Overview",
  "run.detail.progress": "Progress",
  "run.detail.progressCases": "cases",
  "run.detail.report": "Evaluation Summary",
  "run.edit.error": "Failed to update evaluation",
  "run.edit.success": "Evaluation updated successfully",
  "run.edit.title": "Edit Evaluation",
  "run.empty.description": "Start your first evaluation run on this dataset",
  "run.empty.descriptionBenchmark": "Start your first evaluation run on this benchmark",
  "run.empty.title": "No evaluations yet",
  "run.filter.active": "Active",
  "run.filter.empty": "No evaluations match the current filter.",
  "run.idle.hint": "Click Start to begin evaluation",
  "run.metrics.avgScore": "Avg Score",
  "run.metrics.cost": "Cost",
  "run.metrics.duration": "Duration",
  "run.metrics.errorCases": "Error",
  "run.metrics.evaluated": "{{count}} evaluated",
  "run.metrics.passRate": "Pass Rate",
  "run.metrics.perCase": "/ case",
  "run.metrics.tokens": "Tokens",
  "run.metrics.totalDuration": "Cumulative",
  "run.pending.hint": "Evaluation is queued, waiting to start...",
  "run.running.hint": "Evaluation is running, results will appear shortly...",
  "run.status.aborted": "Aborted",
  "run.status.completed": "Completed",
  "run.status.error": "Run Error",
  "run.status.failed": "Failed",
  "run.status.idle": "Idle",
  "run.status.pending": "Pending",
  "run.status.running": "Running",
  "run.status.timeout": "Timeout",
  "sidebar.benchmarks": "Benchmarks",
  "sidebar.dashboard": "Dashboard",
  "sidebar.datasets": "Datasets",
  "sidebar.runs": "Runs",
  "table.columns.avgCost": "Avg Cost",
  "table.columns.category": "Category",
  "table.columns.cost": "Cost",
  "table.columns.difficulty": "Difficulty",
  "table.columns.duration": "Duration",
  "table.columns.evalMode": "Eval Mode",
  "table.columns.expected": "Expected Answer",
  "table.columns.input": "Input",
  "table.columns.score": "Score",
  "table.columns.status": "Status",
  "table.columns.steps": "Steps",
  "table.columns.tags": "Tags",
  "table.columns.tokens": "Tokens",
  "table.columns.totalCost": "Total Cost",
  "table.filter.all": "All",
  "table.filter.error": "Run Error",
  "table.filter.failed": "Failed",
  "table.filter.passed": "Passed",
  "table.filter.running": "Running",
  "table.search.placeholder": "Search cases...",
  "table.total": "Total {{count}}",
  "testCase.actions.add": "Add Test Case",
  "testCase.actions.import": "Import Test Cases",
  "testCase.create.advanced": "More Options",
  "testCase.create.difficulty.label": "Difficulty",
  "testCase.create.error": "Failed to add test case",
  "testCase.create.expected.label": "Expected Output",
  "testCase.create.expected.placeholder": "Enter the expected answer",
  "testCase.create.expected.required": "Please enter the expected output",
  "testCase.create.input.label": "Input",
  "testCase.create.input.placeholder": "Enter the test case input or question",
  "testCase.create.success": "Test case added successfully",
  "testCase.create.tags.label": "Tags",
  "testCase.create.tags.placeholder": "Comma-separated tags (optional)",
  "testCase.create.title": "Add Test Case",
  "testCase.delete.confirm": "Are you sure you want to delete this test case?",
  "testCase.delete.error": "Failed to delete test case",
  "testCase.delete.success": "Test case deleted",
  "testCase.edit.error": "Failed to update test case",
  "testCase.edit.success": "Test case updated successfully",
  "testCase.edit.title": "Edit Test Case",
  "testCase.empty.description": "Import or manually add test cases to this dataset",
  "testCase.empty.title": "No test cases yet",
  "testCase.preview.expected": "Expected",
  "testCase.preview.input": "Input",
  "testCase.preview.title": "Test Case Preview",
  "testCase.search.placeholder": "Search cases..."
}
@@ -397,6 +397,7 @@
   "tab.chat": "会话",
   "tab.community": "社区",
   "tab.discover": "发现",
+  "tab.eval": "评测实验室",
   "tab.files": "文件",
   "tab.home": "首页",
   "tab.knowledgeBase": "资源库",
316 locales/zh-CN/eval.json Normal file

@@ -0,0 +1,316 @@
{
  "benchmark.actions.delete": "删除基准",
  "benchmark.actions.delete.confirm": "确定要删除此基准吗?相关数据集和评测记录也会被删除。",
  "benchmark.actions.edit": "编辑基准",
  "benchmark.actions.export": "导出",
  "benchmark.card.bestScore": "最佳",
  "benchmark.card.caseCount": "{{count}} 个用例",
  "benchmark.card.datasetCount": "{{count}} 个数据集",
  "benchmark.card.empty": "暂无评测记录",
  "benchmark.card.emptyHint": "前往基准详情页创建新的评测",
  "benchmark.card.importDataset": "导入数据集",
  "benchmark.card.noDataset": "暂无数据集",
  "benchmark.card.noDatasetHint": "导入数据集以开始评测",
  "benchmark.card.noRecentRuns": "暂无最近的评测记录",
  "benchmark.card.recentRuns": "最近评测",
  "benchmark.card.runCount": "{{count}} 次评测",
  "benchmark.card.startFirst": "开始首次评测",
  "benchmark.card.viewAll": "查看全部 {{count}} 条",
  "benchmark.create.confirm": "创建",
  "benchmark.create.description.label": "描述",
  "benchmark.create.description.placeholder": "基准描述(选填)",
  "benchmark.create.error": "创建基准失败",
  "benchmark.create.identifier.label": "标识符",
  "benchmark.create.identifier.placeholder": "benchmark-identifier",
  "benchmark.create.identifierRequired": "请输入标识符",
  "benchmark.create.name.label": "名称",
  "benchmark.create.name.placeholder": "输入基准名称",
  "benchmark.create.nameRequired": "请输入基准名称",
  "benchmark.create.success": "基准创建成功",
  "benchmark.create.tags.label": "标签",
  "benchmark.create.tags.placeholder": "添加标签,用逗号或空格分隔",
  "benchmark.create.title": "创建基准",
  "benchmark.detail.backToOverview": "返回总览",
  "benchmark.detail.datasetCount": "此基准包含 {{count}} 个数据集",
  "benchmark.detail.runCount": "此基准有 {{count}} 次评测",
  "benchmark.detail.stats.addFirstDataset": "点击添加首个数据集",
  "benchmark.detail.stats.avgCost": "平均成本",
  "benchmark.detail.stats.avgDuration": "平均耗时",
  "benchmark.detail.stats.basedOnLastNRuns": "基于最近 {{count}} 次评测",
  "benchmark.detail.stats.bestPerformance": "目前最佳表现由 {{agent}} 达成,通过率 {{passRate}}%",
  "benchmark.detail.stats.bestScore": "最佳分数",
  "benchmark.detail.stats.cases": "用例",
  "benchmark.detail.stats.dataScale": "数据规模",
  "benchmark.detail.stats.datasets": "数据集",
  "benchmark.detail.stats.needSetup": "需配置",
  "benchmark.detail.stats.noEvalRecord": "尚无评测记录",
  "benchmark.detail.stats.perRun": "/ 次",
  "benchmark.detail.stats.runs": "评测",
  "benchmark.detail.stats.tags": "标签",
  "benchmark.detail.stats.topAgents": "Top Agents",
  "benchmark.detail.stats.totalCases": "总用例数",
  "benchmark.detail.stats.waiting": "Waiting...",
  "benchmark.detail.tabs.data": "数据",
  "benchmark.detail.tabs.datasets": "数据集",
  "benchmark.detail.tabs.runs": "评测",
  "benchmark.edit.confirm": "保存",
  "benchmark.edit.error": "更新基准失败",
  "benchmark.edit.success": "基准更新成功",
  "benchmark.edit.title": "编辑基准",
  "benchmark.empty": "暂无基准,请先创建一个。",
  "caseDetail.actual": "实际输出",
  "caseDetail.chatArea.title": "对话记录",
  "caseDetail.completionReason": "状态",
  "caseDetail.cost": "费用",
  "caseDetail.difficulty": "难度",
  "caseDetail.duration": "耗时",
  "caseDetail.expected": "期望输出",
  "caseDetail.failureReason": "失败原因",
  "caseDetail.input": "输入",
  "caseDetail.judgeComment": "裁判评语",
  "caseDetail.resources": "资源",
  "caseDetail.score": "评分",
  "caseDetail.section.runtime": "执行信息",
  "caseDetail.section.scoring": "评分详情",
  "caseDetail.section.testCase": "测试用例",
  "caseDetail.steps": "执行步数",
  "caseDetail.threads.attempt": "运行轨迹 #{{number}}",
  "caseDetail.tokens": "Token 用量",
  "common.cancel": "取消",
  "common.create": "创建",
  "common.delete": "删除",
  "common.edit": "编辑",
  "common.later": "稍后",
  "common.next": "下一步",
  "common.update": "更新",
  "dataset.actions.addDataset": "添加数据集",
  "dataset.actions.import": "导入数据",
  "dataset.actions.importDataset": "导入数据集",
  "dataset.create.description.label": "描述",
  "dataset.create.description.placeholder": "数据集描述(选填)",
  "dataset.create.error": "创建数据集失败",
  "dataset.create.identifier.label": "标识符",
  "dataset.create.identifier.placeholder": "dataset-identifier",
  "dataset.create.identifierRequired": "请输入标识符",
  "dataset.create.importNow": "是否立即导入数据?",
  "dataset.create.name.label": "数据集名称",
  "dataset.create.name.placeholder": "输入数据集名称",
  "dataset.create.nameRequired": "请输入数据集名称",
  "dataset.create.preset.label": "数据集预设",
  "dataset.create.success": "数据集创建成功",
  "dataset.create.successTitle": "数据集已创建",
  "dataset.create.title": "创建数据集",
  "dataset.delete.confirm": "确定要删除此数据集吗?其中的所有数据用例也会被删除。",
  "dataset.delete.error": "删除数据集失败",
  "dataset.delete.success": "数据集删除成功",
  "dataset.detail.addRun": "新建评测",
  "dataset.detail.backToBenchmark": "返回基准测试",
  "dataset.detail.caseCount": "{{count}} 个测试用例",
  "dataset.detail.relatedRuns": "关联评测 ({{count}})",
  "dataset.detail.testCases": "测试用例",
  "dataset.detail.viewDetail": "查看详情",
  "dataset.edit.error": "更新数据集失败",
  "dataset.edit.success": "数据集更新成功",
  "dataset.edit.title": "编辑数据集",
  "dataset.empty": "暂无数据集",
  "dataset.empty.description": "导入数据集以开始构建此基准",
  "dataset.empty.title": "暂无数据集",
  "dataset.evalMode.hint": "数据集默认评估模式,可被用例级别覆盖",
  "dataset.import.category": "分类",
  "dataset.import.categoryDesc": "用于分组的分类标签",
  "dataset.import.choices": "选项",
  "dataset.import.choicesDesc": "多选选项",
  "dataset.import.confirm": "导入",
  "dataset.import.error": "导入数据集失败",
  "dataset.import.expected": "期望答案",
  "dataset.import.expectedDelimiter": "答案分隔符",
  "dataset.import.expectedDelimiter.desc": "答案分隔符",
  "dataset.import.expectedDelimiter.placeholder": "如 | 或 ,",
  "dataset.import.expectedDesc": "用于对比的正确答案",
  "dataset.import.fieldMapping": "字段映射",
  "dataset.import.fieldMapping.desc": "必须指定「输入」列",
  "dataset.import.hideSkipped": "隐藏跳过的列",
  "dataset.import.ignore": "跳过",
  "dataset.import.ignoreDesc": "不导入此列",
  "dataset.import.input": "输入",
  "dataset.import.inputDesc": "发送给模型的问题或提示",
  "dataset.import.metadata": "元数据",
  "dataset.import.metadataDesc": "额外信息,原样存储",
  "dataset.import.next": "下一步",
  "dataset.import.parseError": "文件解析失败",
  "dataset.import.parsing": "正在解析文件...",
  "dataset.import.prev": "上一步",
  "dataset.import.preview": "数据预览",
  "dataset.import.preview.desc": "确认映射正确后导入。",
  "dataset.import.preview.rows": "共 {{count}} 行",
  "dataset.import.sortOrder": "题目编号",
  "dataset.import.sortOrderDesc": "题目/用例的编号,便于沟通引用",
  "dataset.import.step.mapping": "映射字段",
  "dataset.import.step.preview": "预览",
  "dataset.import.step.upload": "上传文件",
  "dataset.import.success": "成功导入 {{count}} 个数据用例",
  "dataset.import.title": "导入数据集",
  "dataset.import.upload.hint": "支持 CSV、XLSX、JSON、JSONL",
  "dataset.import.upload.text": "点击或拖拽文件到此处",
  "dataset.import.uploading": "上传中...",
  "dataset.switchDataset": "切换数据集",
  "difficulty.easy": "简单",
  "difficulty.hard": "困难",
  "difficulty.medium": "中等",
  "evalMode.contains": "包含匹配",
  "evalMode.contains.desc": "输出中必须包含期望的文本",
  "evalMode.equals": "精确匹配",
  "evalMode.equals.desc": "输出必须与期望内容完全一致",
  "evalMode.label": "评估模式",
  "evalMode.llm-rubric": "LLM 评判",
  "evalMode.llm-rubric.desc": "使用 LLM 评估输出质量",
  "evalMode.placeholder": "选择评估模式",
  "evalMode.prompt.label": "评判提示词",
  "evalMode.prompt.placeholder": "输入 LLM 评判的评估标准或提示词",
  "evalMode.rubric": "混合指标评分",
  "evalMode.rubric.desc": "使用基准的加权指标进行混合评分",
  "overview.createBenchmark": "创建基准",
  "overview.importDataset": "导入数据集",
  "overview.subtitle": "对你的 AI 助手进行跨数据集的基准测试与评估",
  "overview.title": "评测实验室",
  "run.actions.abort": "终止",
  "run.actions.abort.confirm": "确定要终止此评测吗?",
  "run.actions.create": "新建评测",
  "run.actions.delete": "删除",
  "run.actions.delete.confirm": "确定要删除此评测吗?",
  "run.actions.edit": "编辑",
  "run.actions.retryCase": "重试",
  "run.actions.retryErrors": "重试错误用例",
  "run.actions.retryErrors.confirm": "将重新运行所有错误和超时的用例。已通过和未通过的用例不受影响。",
  "run.actions.run": "执行",
  "run.actions.start": "启动",
  "run.actions.start.confirm": "确定要启动此评测吗?",
  "run.chart.duration": "耗时 (s)",
  "run.chart.error": "出错",
  "run.chart.fail": "失败",
  "run.chart.latencyDistribution": "耗时分布",
  "run.chart.latencyTokenDistribution": "耗时 / Token 分布",
  "run.chart.pass": "通过",
  "run.chart.passFailError": "通过 / 失败 / 出错",
  "run.chart.tokens": "Tokens",
  "run.config.agentId": "执行 Agent",
  "run.config.concurrency": "并发数",
  "run.config.judgeModel": "裁判模型",
  "run.config.k": "执行次数 (K)",
  "run.config.k.hint": "每个测试用例执行 {{k}} 次,用于 pass@{{k}}/pass^{{k}} 指标",
  "run.config.maxSteps": "最大步数",
  "run.config.maxSteps.hint": "Agent 每执行一次 LLM 调用或工具调用都算 1 步",
  "run.config.model": "模型",
  "run.config.temperature": "温度",
  "run.config.timeout": "超时时间",
  "run.config.timeout.unit": "分钟",
  "run.create.advanced": "高级设置",
  "run.create.agent": "执行 Agent",
  "run.create.agent.placeholder": "选择助手",
  "run.create.agent.required": "请选择一个助手",
  "run.create.caseCount": "{{count}} 个用例",
  "run.create.confirm": "创建并执行",
  "run.create.createOnly": "创建",
  "run.create.dataset": "数据集",
  "run.create.dataset.placeholder": "选择数据集",
  "run.create.dataset.required": "请选择数据集",
  "run.create.name": "评测名称",
  "run.create.name.placeholder": "输入评测名称",
  "run.create.name.required": "请输入评测名称",
  "run.create.name.useTimestamp": "使用当前时间作为名称",
  "run.create.openAgent": "在新窗口中打开助手",
  "run.create.title": "新建评测",
  "run.create.titleWithDataset": "基于 {{dataset}} 数据集新建评测",
  "run.detail.agent": "执行 Agent",
  "run.detail.agent.none": "未指定",
  "run.detail.agent.unnamed": "未命名助手",
  "run.detail.backToBenchmark": "返回基准测试",
  "run.detail.caseResults": "评测明细",
  "run.detail.config": "评测配置",
  "run.detail.configSnapshot": "配置快照",
  "run.detail.dataset": "数据集",
  "run.detail.model": "模型",
  "run.detail.overview": "概览",
  "run.detail.progress": "进度",
  "run.detail.progressCases": "个用例",
  "run.detail.report": "评测概要",
  "run.edit.error": "更新评测失败",
  "run.edit.success": "评测更新成功",
  "run.edit.title": "编辑评测",
  "run.empty.description": "在此数据集上开始你的首次评测",
  "run.empty.descriptionBenchmark": "在此基准上开始你的首次评测",
  "run.empty.title": "暂无评测",
  "run.filter.active": "进行中",
  "run.filter.empty": "没有符合当前筛选条件的评测。",
  "run.idle.hint": "点击开始以启动评测",
  "run.metrics.avgScore": "平均分",
  "run.metrics.cost": "费用",
  "run.metrics.duration": "耗时",
  "run.metrics.errorCases": "出错",
  "run.metrics.evaluated": "{{count}} 个已评测",
  "run.metrics.passRate": "通过率",
  "run.metrics.perCase": "/用例",
  "run.metrics.tokens": "Tokens",
  "run.metrics.totalDuration": "累计",
  "run.pending.hint": "评测已进入运行队列,等待启动中...",
  "run.running.hint": "评测进行中,结果即将呈现...",
  "run.status.aborted": "已终止",
  "run.status.completed": "已完成",
  "run.status.error": "运行出错",
  "run.status.failed": "失败",
  "run.status.idle": "待开始",
  "run.status.pending": "等待中",
  "run.status.running": "进行中",
  "run.status.timeout": "超时",
  "sidebar.benchmarks": "基准",
  "sidebar.dashboard": "总览",
  "sidebar.datasets": "数据集",
  "sidebar.runs": "评测",
  "table.columns.avgCost": "平均成本",
  "table.columns.category": "分类",
  "table.columns.cost": "成本",
  "table.columns.difficulty": "难度",
  "table.columns.duration": "耗时",
  "table.columns.evalMode": "评估方式",
  "table.columns.expected": "期望答案",
  "table.columns.input": "输入",
  "table.columns.score": "评分",
  "table.columns.status": "状态",
  "table.columns.steps": "步数",
  "table.columns.tags": "标签",
  "table.columns.tokens": "Tokens",
  "table.columns.totalCost": "总成本",
  "table.filter.all": "全部",
  "table.filter.error": "运行出错",
  "table.filter.failed": "失败",
  "table.filter.passed": "通过",
  "table.filter.running": "运行中",
  "table.search.placeholder": "搜索用例...",
  "table.total": "共 {{count}} 条",
  "testCase.actions.add": "添加数据用例",
  "testCase.actions.import": "导入数据用例",
  "testCase.create.advanced": "更多选项",
  "testCase.create.difficulty.label": "难度",
  "testCase.create.error": "添加数据用例失败",
  "testCase.create.expected.label": "期望输出",
  "testCase.create.expected.placeholder": "输入期望的回答",
  "testCase.create.expected.required": "请输入期望输出",
  "testCase.create.input.label": "输入",
  "testCase.create.input.placeholder": "输入数据用例的问题或输入内容",
  "testCase.create.success": "数据用例添加成功",
  "testCase.create.tags.label": "标签",
  "testCase.create.tags.placeholder": "用逗号分隔的标签(选填)",
  "testCase.create.title": "添加数据用例",
  "testCase.delete.confirm": "确定要删除该数据用例吗?",
  "testCase.delete.error": "删除数据用例失败",
  "testCase.delete.success": "数据用例已删除",
  "testCase.edit.error": "更新数据用例失败",
  "testCase.edit.success": "数据用例更新成功",
  "testCase.edit.title": "编辑数据用例",
  "testCase.empty.description": "导入或手动添加数据用例到此数据集",
  "testCase.empty.title": "暂无数据用例",
  "testCase.preview.expected": "期望",
  "testCase.preview.input": "输入",
  "testCase.preview.title": "数据用例预览",
  "testCase.search.placeholder": "搜索用例..."
}
@@ -3,26 +3,27 @@ import { defineConfig } from './src/libs/next/config/define-config';
 const isVercel = !!process.env.VERCEL_ENV;

 const nextConfig = defineConfig({
   experimental: {
     webpackBuildWorker: true,
     webpackMemoryOptimizations: true,
   },
-  // Vercel serverless optimization: exclude musl binaries
+  // Vercel serverless optimization: exclude musl binaries and ffmpeg from all routes
   // Vercel uses Amazon Linux (glibc), not Alpine Linux (musl)
-  // This saves ~45MB (29MB canvas-musl + 16MB sharp-musl)
+  // ffmpeg-static (~76MB) is only needed by /api/webhooks/video/* route
+  // This saves ~120MB (29MB canvas-musl + 16MB sharp-musl + 76MB ffmpeg)
   outputFileTracingExcludes: isVercel
     ? {
         '*': [
           'node_modules/.pnpm/@napi-rs+canvas-*-musl*',
           'node_modules/.pnpm/@img+sharp-libvips-*musl*',
+          'node_modules/ffmpeg-static/**',
+          'node_modules/.pnpm/ffmpeg-static*/**',
         ],
       }
     : undefined,
-  // Include ffmpeg binary for video webhook processing
+  // Include ffmpeg binary only for video webhook processing
   // refs: https://github.com/vercel-labs/ffmpeg-on-vercel
-  outputFileTracingIncludes: {
-    '/api/webhooks/video/*': ['./node_modules/ffmpeg-static/ffmpeg'],
-  },
+  outputFileTracingIncludes: isVercel
+    ? {
+        '/api/webhooks/video/*': ['./node_modules/ffmpeg-static/ffmpeg'],
+      }
+    : undefined,
   webpack: (webpackConfig, context) => {
     const { dev } = context;
     if (!dev) {
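The conditional tracing setup in this hunk can be sketched as a standalone helper (hypothetical `buildTracingConfig` name; in the real repo these objects sit inline in `next.config`):

```typescript
// Sketch of the conditional file-tracing config (hypothetical helper name).
const buildTracingConfig = (isVercel: boolean) => ({
  // On Vercel (glibc host), exclude musl binaries and ffmpeg from every route.
  outputFileTracingExcludes: isVercel
    ? {
        '*': [
          'node_modules/.pnpm/@napi-rs+canvas-*-musl*',
          'node_modules/.pnpm/@img+sharp-libvips-*musl*',
          'node_modules/ffmpeg-static/**',
          'node_modules/.pnpm/ffmpeg-static*/**',
        ],
      }
    : undefined,
  // Re-include the ffmpeg binary only for the video webhook route.
  outputFileTracingIncludes: isVercel
    ? { '/api/webhooks/video/*': ['./node_modules/ffmpeg-static/ffmpeg'] }
    : undefined,
});
```

Locally (`isVercel` false) both maps are `undefined`, so no tracing overrides apply.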
@@ -199,6 +199,8 @@
     "@lobechat/builtin-tool-web-browsing": "workspace:*",
     "@lobechat/business-config": "workspace:*",
     "@lobechat/business-const": "workspace:*",
+    "@lobechat/eval-dataset-parser": "workspace:*",
+    "@lobechat/eval-rubric": "workspace:*",
     "@lobechat/config": "workspace:*",
     "@lobechat/const": "workspace:*",
     "@lobechat/context-engine": "workspace:*",
@@ -434,8 +434,10 @@ export class GeneralChatAgent implements Agent {

     // No tool calls, conversation is complete
     return {
-      reason: 'completed',
-      reasonDetail: 'LLM response completed without tool calls',
+      reason: state.forceFinish ? 'max_steps_completed' : 'completed',
+      reasonDetail: state.forceFinish
+        ? 'Force finish: LLM produced final text response after max steps'
+        : 'LLM response completed without tool calls',
       type: 'finish',
     };
   }
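The reason selection in this hunk reduces to a one-liner (standalone sketch with assumed types; the real code reads `state.forceFinish`):

```typescript
// Sketch: which finish reason the agent reports, depending on force-finish mode.
type FinishReason = 'completed' | 'max_steps_completed';

const pickFinishReason = (forceFinish?: boolean): FinishReason =>
  forceFinish ? 'max_steps_completed' : 'completed';
```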
@@ -466,6 +466,39 @@ describe('AgentRuntime', () => {
      });

       expect(result.newState.status).toBe('done');
+      // finish is not a real execution step, should not increment stepCount
+      expect(result.newState.stepCount).toBe(0);
     });

+    it('should not count finish as a step in stepCount', async () => {
+      const agent = new MockAgent();
+      agent.modelRuntime = async function* () {
+        yield { content: 'test response' };
+      };
+
+      agent.runner = vi.fn().mockImplementation((context: AgentRuntimeContext) => {
+        if (context.phase === 'user_input') {
+          return Promise.resolve({ type: 'call_llm', payload: { messages: [] } });
+        }
+        // After LLM result, finish
+        return Promise.resolve({ type: 'finish', reason: 'completed', reasonDetail: 'Done' });
+      });
+
+      const runtime = new AgentRuntime(agent);
+      const state = AgentRuntime.createInitialState({
+        operationId: 'test-session',
+        messages: [{ role: 'user', content: 'Hello' }],
+      });
+
+      // Step 1: call_llm (real work)
+      const result1 = await runtime.step(state, createTestContext('user_input'));
+      expect(result1.newState.stepCount).toBe(1);
+      expect(result1.newState.status).toBe('running');
+
+      // Step 2: finish (not real work)
+      const result2 = await runtime.step(result1.newState, result1.nextContext);
+      expect(result2.newState.stepCount).toBe(1); // should stay at 1, not become 2
+      expect(result2.newState.status).toBe('done');
+    });
   });
 });
@@ -563,18 +596,17 @@ describe('AgentRuntime', () => {
       expect(result3.newState.stepCount).toBe(3);
       expect(result3.newState.status).not.toBe('error');

-      // Fourth step - should finish due to maxSteps
+      // Fourth step - exceeds maxSteps, enters forceFinish mode
+      // Instead of immediately stopping, the runtime sets forceFinish=true
+      // and continues execution so the agent can produce a final text response
       const result4 = await runtime.step(result3.newState, createTestContext('user_input'));
       expect(result4.newState.stepCount).toBe(4);
-      expect(result4.newState.status).toBe('done');
-      expect(result4.events[0]).toMatchObject({
-        type: 'done',
-        finalState: expect.objectContaining({
-          status: 'done',
-        }),
-        reason: 'max_steps_exceeded',
-        reasonDetail: 'Maximum steps exceeded: 3',
-      });
+      expect(result4.newState.forceFinish).toBe(true);
+      expect(result4.newState.status).toBe('running'); // continues for final LLM call
+
+      // Fifth step - LLM result with no tool calls, agent finishes
+      const result5 = await runtime.step(result4.newState, result4.nextContext!);
+      expect(result5.newState.status).toBe('done');
     });

     it('should include stepCount in session context', async () => {
@@ -1835,6 +1867,7 @@ describe('AgentRuntime', () => {
     it('should handle LLM errors', async () => {
       const agent = new MockAgent();
       agent.modelRuntime = async function* () {
+        yield* []; // satisfy require-yield
         throw new Error('LLM API error');
       };

@@ -88,20 +88,14 @@ export class AgentRuntime {

     // Check maximum steps limit
     if (newState.maxSteps && newState.stepCount > newState.maxSteps) {
-      // Finish execution when maxSteps is exceeded
-      newState.status = 'done';
-      const finishEvent = {
-        finalState: newState,
-        reason: 'max_steps_exceeded' as const,
-        reasonDetail: `Maximum steps exceeded: ${newState.maxSteps}`,
-        type: 'done' as const,
-      };
-
-      return {
-        events: [finishEvent],
-        newState,
-        nextContext: undefined, // No next context when done
-      };
+      if (newState.forceFinish) {
+        // Already in forceFinish flow, skip maxSteps check and continue execution
+      } else {
+        // First time exceeding: set forceFinish flag
+        // Tools will be allowed to complete, but the next LLM call will produce
+        // a final text response (tools stripped, summary prompt injected)
+        newState.forceFinish = true;
+      }
     }

     // Use provided context or create initial context
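The max-steps policy introduced in this hunk can be sketched standalone (assumed minimal state shape; not the real runtime class):

```typescript
// Sketch of the force-finish policy at the top of step().
interface StepState {
  forceFinish?: boolean;
  maxSteps?: number;
  stepCount: number;
}

// On first overflow, flag forceFinish instead of terminating; later overflows
// are ignored so the final (tool-stripped) LLM call can still run.
const applyMaxStepsPolicy = (state: StepState): StepState => {
  if (state.maxSteps && state.stepCount > state.maxSteps && !state.forceFinish) {
    return { ...state, forceFinish: true };
  }
  return state;
};
```

The design trades a hard stop for one extra LLM round: the agent always gets a chance to summarize instead of ending mid-tool-loop.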
@@ -164,8 +158,11 @@ export class AgentRuntime {
     let currentState = newState;
     const allEvents: AgentEvent[] = [];
     let finalNextContext: AgentRuntimeContext | undefined = undefined;
+    let hasFinishInstruction = false;

     for (const instruction of normalizedInstructions) {
+      if (instruction.type === 'finish') hasFinishInstruction = true;
+
       let result;

       // Special handling for batch tool execution
@@ -208,6 +205,11 @@ export class AgentRuntime {
     currentState.stepCount = newState.stepCount;
     currentState.lastModified = newState.lastModified;

+    // A 'finish' instruction is not a real execution step, undo the +1 from the top of step()
+    if (hasFinishInstruction) {
+      currentState.stepCount = Math.max(currentState.stepCount - 1, 0);
+    }
+
     return {
       events: allEvents,
       newState: currentState,
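The step-count correction above boils down to one guarded decrement (hypothetical helper name, for illustration):

```typescript
// 'finish' is not real work, so the +1 added at the top of step() is undone,
// clamped at zero so the count never goes negative.
const adjustStepCount = (stepCount: number, hasFinishInstruction: boolean): number =>
  hasFinishInstruction ? Math.max(stepCount - 1, 0) : stepCount;
```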
@@ -1,4 +1,3 @@
-/* eslint-disable sort-keys-fix/sort-keys-fix, typescript-sort-keys/interface */
 import type { ChatToolPayload } from '@lobechat/types';

 import type { AgentState, ToolsCalling } from './state';
@@ -63,6 +62,7 @@ export type FinishReason =
   | 'user_requested' // User requested to end
   | 'user_aborted' // User abort
   | 'max_steps_exceeded' // Reached maximum steps limit
+  | 'max_steps_completed' // Completed after reaching max steps (forceFinish)
   | 'cost_limit_exceeded' // Reached cost limit
   | 'timeout' // Execution timeout
   | 'agent_decision' // Agent decided to finish
@@ -1,4 +1,3 @@
-/* eslint-disable sort-keys-fix/sort-keys-fix, typescript-sort-keys/interface */
 import type {
   ChatToolPayload,
   SecurityBlacklistConfig,
@@ -26,6 +25,12 @@ export interface AgentState {
   // --- Metadata ---
   createdAt: string;
   error?: any;
+  /**
+   * When true, the agent is in force-finish mode (maxSteps exceeded).
+   * Tools are allowed to complete, but the next LLM call will have tools stripped
+   * and a summary prompt injected to produce a final text response.
+   */
+  forceFinish?: boolean;
   // --- Interruption Handling ---
   /**
    * When status is 'interrupted', this stores the interruption context
@@ -47,6 +47,8 @@ export const SESSION_CHAT_URL = (agentId: string, mobile?: boolean) => {
   return `/agent/${agentId}`;
 };

+export const AGENT_PROFILE_URL = (agentId: string) => `/agent/${agentId}/profile`;
+
 export const GROUP_CHAT_URL = (groupId: string) => `/group/${groupId}`;

 export const LIBRARY_URL = (id: string) => urlJoin('/resource/library', id);
@@ -1,4 +1,3 @@
-/* eslint-disable sort-keys-fix/sort-keys-fix */
 import debug from 'debug';

 import type { OpenAIChatMessage } from '@/types/index';
@@ -23,6 +22,8 @@
 } from '../../processors';
 import {
   AgentBuilderContextInjector,
+  EvalContextSystemInjector,
+  ForceFinishSummaryInjector,
   GroupAgentBuilderContextInjector,
   GroupContextInjector,
   GTDPlanInjector,
@@ -115,6 +116,7 @@ export class MessagesEngine {
       provider,
       systemRole,
       inputTemplate,
+      forceFinish,
       historySummary,
       formatHistorySummary,
       knowledge,
@@ -123,6 +125,7 @@ export class MessagesEngine {
       variableGenerators,
       fileContext,
       agentBuilderContext,
+      evalContext,
       groupAgentBuilderContext,
       agentGroup,
       gtd,
@@ -152,6 +155,9 @@ export class MessagesEngine {
       // 1. System role injection (agent's system role)
       new SystemRoleInjector({ systemRole }),

+      // 1b. Eval context injection (appends envPrompt to system message)
+      new EvalContextSystemInjector({ enabled: !!evalContext?.envPrompt, evalContext }),
+
       // =============================================
       // Phase 2: First User Message Context Injection
       // These providers inject content before the first user message
@@ -323,7 +329,10 @@ export class MessagesEngine {
       // 24. Tool message reordering
       new ToolMessageReorder(),

-      // 25. Message cleanup (final step, keep only necessary fields)
+      // 25. Force finish summary injection (when maxSteps exceeded, inject summary prompt)
+      new ForceFinishSummaryInjector({ enabled: !!forceFinish }),
+
+      // 26. Message cleanup (final step, keep only necessary fields)
       new MessageCleanupProcessor(),
     ];
   }
|||
|
|
@ -1,4 +1,4 @@
|
|||
/* eslint-disable typescript-sort-keys/interface */
|
||||
/* eslint-disable perfectionist/sort-interfaces */
|
||||
import type { FileContent, KnowledgeBaseInfo, PageContentContext } from '@lobechat/prompts';
|
||||
import type { RuntimeInitialContext, RuntimeStepContext } from '@lobechat/types';
|
||||
|
||||
|
|
@@ -6,10 +6,11 @@ import type { OpenAIChatMessage, UIChatMessage } from '@/types/index';

 import type { AgentInfo } from '../../processors/GroupRoleTransform';
 import type { AgentBuilderContext } from '../../providers/AgentBuilderContextInjector';
-import type { GTDPlan } from '../../providers/GTDPlanInjector';
-import type { GTDTodoList } from '../../providers/GTDTodoInjector';
+import type { EvalContext } from '../../providers/EvalContextSystemInjector';
 import type { GroupAgentBuilderContext } from '../../providers/GroupAgentBuilderContextInjector';
 import type { GroupMemberInfo } from '../../providers/GroupContextInjector';
+import type { GTDPlan } from '../../providers/GTDPlanInjector';
+import type { GTDTodoList } from '../../providers/GTDTodoInjector';
 import type { LobeToolManifest } from '../tools/types';

 /**
@@ -180,6 +181,8 @@ export interface MessagesEngineParams {
   // ========== Agent configuration ==========
   /** Whether to enable history message count limit */
   enableHistoryCount?: boolean;
+  /** Force finish flag: when true, injects summary prompt for max-steps completion */
+  forceFinish?: boolean;
   /** Function to format history summary */
   formatHistorySummary?: (summary: string) => string;
   /** History message count limit */
@@ -212,6 +215,8 @@ export interface MessagesEngineParams {
   // ========== Extended contexts (both frontend and backend) ==========
   /** Agent Builder context */
   agentBuilderContext?: AgentBuilderContext;
+  /** Eval context for injecting environment prompts into system message */
+  evalContext?: EvalContext;
   /** Agent group configuration for multi-agent scenarios */
   agentGroup?: AgentGroupConfig;
   /** Group Agent Builder context */
@@ -266,6 +271,7 @@ export interface MessagesEngineResult {

 export { type AgentInfo } from '../../processors/GroupRoleTransform';
 export { type AgentBuilderContext } from '../../providers/AgentBuilderContextInjector';
+export { type EvalContext } from '../../providers/EvalContextSystemInjector';
 export { type GroupAgentBuilderContext } from '../../providers/GroupAgentBuilderContextInjector';
 export { type GTDPlan } from '../../providers/GTDPlanInjector';
 export { type GTDTodoItem, type GTDTodoList } from '../../providers/GTDTodoInjector';
@@ -0,0 +1,64 @@
import debug from 'debug';

import { BaseProvider } from '../base/BaseProvider';
import type { PipelineContext, ProcessorOptions } from '../types';

const log = debug('context-engine:provider:EvalContextSystemInjector');

export interface EvalContext {
  envPrompt?: string;
}

export interface EvalContextSystemInjectorConfig {
  enabled?: boolean;
  evalContext?: EvalContext;
}

/**
 * Eval Context Injector
 * Appends eval environment prompt to the existing system message,
 * or creates a new system message if none exists.
 * Should run after SystemRoleInjector in the pipeline.
 */
export class EvalContextSystemInjector extends BaseProvider {
  readonly name = 'EvalContextSystemInjector';

  constructor(
    private config: EvalContextSystemInjectorConfig,
    options: ProcessorOptions = {},
  ) {
    super(options);
  }

  protected async doProcess(context: PipelineContext): Promise<PipelineContext> {
    if (!this.config.enabled || !this.config.evalContext?.envPrompt) {
      log('Disabled or no envPrompt configured, skipping injection');
      return this.markAsExecuted(context);
    }

    const clonedContext = this.cloneContext(context);
    const systemMsgIndex = clonedContext.messages.findIndex((m) => m.role === 'system');

    if (systemMsgIndex >= 0) {
      const original = clonedContext.messages[systemMsgIndex];
      clonedContext.messages[systemMsgIndex] = {
        ...original,
        content: [original.content, this.config.evalContext.envPrompt].filter(Boolean).join('\n\n'),
      };
      log('Appended envPrompt to existing system message');
    } else {
      clonedContext.messages.unshift({
        content: this.config.evalContext.envPrompt,
        createdAt: Date.now(),
        id: `eval-context-${Date.now()}`,
        role: 'system' as const,
        updatedAt: Date.now(),
      });
      log('Created new system message with envPrompt');
    }

    clonedContext.metadata.evalContextInjected = true;

    return this.markAsExecuted(clonedContext);
  }
}
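The merge rule this injector applies can be re-implemented standalone for illustration (assumed minimal message shape; the real provider works on the pipeline context):

```typescript
// Sketch of the system-message merge rule used by EvalContextSystemInjector.
interface Msg {
  content: string;
  role: string;
}

const injectEnvPrompt = (messages: Msg[], envPrompt: string): Msg[] => {
  const i = messages.findIndex((m) => m.role === 'system');
  if (i >= 0) {
    // Append to the existing system message, separated by a blank line.
    const merged = [messages[i].content, envPrompt].filter(Boolean).join('\n\n');
    return messages.map((m, j) => (j === i ? { ...m, content: merged } : m));
  }
  // No system message yet: prepend a new one.
  return [{ content: envPrompt, role: 'system' }, ...messages];
};
```

Appending (rather than replacing) keeps the agent's own system role intact while layering the eval environment prompt on top.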
@@ -0,0 +1,50 @@
import debug from 'debug';

import { BaseProvider } from '../base/BaseProvider';
import type { PipelineContext, ProcessorOptions } from '../types';

const log = debug('context-engine:provider:ForceFinishSummaryInjector');

export interface ForceFinishSummaryInjectorConfig {
  enabled: boolean;
}

/**
 * Force Finish Summary Injector
 *
 * When the agent reaches the maximum step limit (forceFinish mode),
 * this processor appends a system message instructing the LLM to
 * summarize progress and produce a final text response without using tools.
 *
 * Should run near the end of the pipeline (before MessageCleanup).
 */
export class ForceFinishSummaryInjector extends BaseProvider {
  readonly name = 'ForceFinishSummaryInjector';

  constructor(
    private config: ForceFinishSummaryInjectorConfig,
    options: ProcessorOptions = {},
  ) {
    super(options);
  }

  protected async doProcess(context: PipelineContext): Promise<PipelineContext> {
    if (!this.config.enabled) {
      return this.markAsExecuted(context);
    }

    log('Injecting force-finish summary prompt');

    const clonedContext = this.cloneContext(context);

    clonedContext.messages.push({
      content:
        'You have reached the maximum step limit. Please summarize your progress and provide a final response. Do not attempt to use any tools.',
      role: 'system' as const,
    });

    clonedContext.metadata.forceFinishInjected = true;

    return this.markAsExecuted(clonedContext);
  }
}
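In isolation the force-finish behavior is even simpler: when enabled, append one fixed system message and leave everything else alone. A minimal sketch, again with illustrative names rather than the PR's actual API:

```typescript
type ChatMsg = { role: string; content: string };

// The fixed wrap-up instruction appended when the step limit is hit.
const FORCE_FINISH_PROMPT =
  'You have reached the maximum step limit. Please summarize your progress and provide a final response. Do not attempt to use any tools.';

// Illustrative mirror of ForceFinishSummaryInjector.doProcess.
function injectForceFinish(messages: ChatMsg[], enabled: boolean): ChatMsg[] {
  if (!enabled) return messages; // disabled path is a no-op
  return [...messages, { content: FORCE_FINISH_PROMPT, role: 'system' }];
}
```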
@@ -0,0 +1,240 @@
import { describe, expect, it } from 'vitest';

import { EvalContextSystemInjector } from '../EvalContextSystemInjector';

describe('EvalContextSystemInjector', () => {
  it('should append envPrompt to existing system message', async () => {
    const provider = new EvalContextSystemInjector({
      enabled: true,
      evalContext: { envPrompt: 'You are in a test environment.' },
    });

    const context = {
      initialState: {
        messages: [],
        model: 'gpt-4',
        provider: 'openai',
        systemRole: '',
        tools: [],
      },
      isAborted: false,
      messages: [
        {
          content: 'You are a helpful assistant.',
          createdAt: Date.now(),
          id: 'system-1',
          role: 'system',
          updatedAt: Date.now(),
        },
        {
          content: 'Hello',
          createdAt: Date.now(),
          id: '1',
          role: 'user',
          updatedAt: Date.now(),
        },
      ],
      metadata: {
        maxTokens: 4096,
        model: 'gpt-4',
      },
    };

    const result = await provider.process(context);

    expect(result.messages).toHaveLength(2);
    expect(result.messages[0].content).toBe(
      'You are a helpful assistant.\n\nYou are in a test environment.',
    );
    expect(result.messages[0].role).toBe('system');
    expect(result.metadata.evalContextInjected).toBe(true);
  });

  it('should create new system message when none exists', async () => {
    const provider = new EvalContextSystemInjector({
      enabled: true,
      evalContext: { envPrompt: 'You are in a test environment.' },
    });

    const context = {
      initialState: {
        messages: [],
        model: 'gpt-4',
        provider: 'openai',
        systemRole: '',
        tools: [],
      },
      isAborted: false,
      messages: [
        {
          content: 'Hello',
          createdAt: Date.now(),
          id: '1',
          role: 'user',
          updatedAt: Date.now(),
        },
      ],
      metadata: {
        maxTokens: 4096,
        model: 'gpt-4',
      },
    };

    const result = await provider.process(context);

    expect(result.messages).toHaveLength(2);
    expect(result.messages[0]).toEqual(
      expect.objectContaining({
        content: 'You are in a test environment.',
        role: 'system',
      }),
    );
    expect(result.messages[1].role).toBe('user');
    expect(result.metadata.evalContextInjected).toBe(true);
  });

  it('should skip injection when enabled is false', async () => {
    const provider = new EvalContextSystemInjector({
      enabled: false,
      evalContext: { envPrompt: 'You are in a test environment.' },
    });

    const context = {
      initialState: {
        messages: [],
        model: 'gpt-4',
        provider: 'openai',
        systemRole: '',
        tools: [],
      },
      isAborted: false,
      messages: [
        {
          content: 'Hello',
          createdAt: Date.now(),
          id: '1',
          role: 'user',
          updatedAt: Date.now(),
        },
      ],
      metadata: {
        maxTokens: 4096,
        model: 'gpt-4',
      },
    };

    const result = await provider.process(context);

    expect(result.messages).toHaveLength(1);
    expect(result.messages[0].role).toBe('user');
    expect(result.metadata.evalContextInjected).toBeUndefined();
  });

  it('should skip injection when envPrompt is empty', async () => {
    const provider = new EvalContextSystemInjector({
      enabled: true,
      evalContext: { envPrompt: '' },
    });

    const context = {
      initialState: {
        messages: [],
        model: 'gpt-4',
        provider: 'openai',
        systemRole: '',
        tools: [],
      },
      isAborted: false,
      messages: [
        {
          content: 'Hello',
          createdAt: Date.now(),
          id: '1',
          role: 'user',
          updatedAt: Date.now(),
        },
      ],
      metadata: {
        maxTokens: 4096,
        model: 'gpt-4',
      },
    };

    const result = await provider.process(context);

    expect(result.messages).toHaveLength(1);
    expect(result.messages[0].role).toBe('user');
    expect(result.metadata.evalContextInjected).toBeUndefined();
  });

  it('should skip injection when evalContext is undefined', async () => {
    const provider = new EvalContextSystemInjector({ enabled: true });

    const context = {
      initialState: {
        messages: [],
        model: 'gpt-4',
        provider: 'openai',
        systemRole: '',
        tools: [],
      },
      isAborted: false,
      messages: [
        {
          content: 'Hello',
          createdAt: Date.now(),
          id: '1',
          role: 'user',
          updatedAt: Date.now(),
        },
      ],
      metadata: {
        maxTokens: 4096,
        model: 'gpt-4',
      },
    };

    const result = await provider.process(context);

    expect(result.messages).toHaveLength(1);
    expect(result.messages[0].role).toBe('user');
    expect(result.metadata.evalContextInjected).toBeUndefined();
  });

  it('should not modify original context', async () => {
    const provider = new EvalContextSystemInjector({
      enabled: true,
      evalContext: { envPrompt: 'Test env' },
    });

    const originalContent = 'Original system role';
    const context = {
      initialState: {
        messages: [],
        model: 'gpt-4',
        provider: 'openai',
        systemRole: '',
        tools: [],
      },
      isAborted: false,
      messages: [
        {
          content: originalContent,
          createdAt: Date.now(),
          id: 'system-1',
          role: 'system',
          updatedAt: Date.now(),
        },
      ],
      metadata: {
        maxTokens: 4096,
        model: 'gpt-4',
      },
    };

    await provider.process(context);

    expect(context.messages[0].content).toBe(originalContent);
    expect((context.metadata as any).evalContextInjected).toBeUndefined();
  });
});
@@ -1,5 +1,7 @@
 // Context Provider exports
 export { AgentBuilderContextInjector } from './AgentBuilderContextInjector';
+export { EvalContextSystemInjector } from './EvalContextSystemInjector';
+export { ForceFinishSummaryInjector } from './ForceFinishSummaryInjector';
 export { GroupAgentBuilderContextInjector } from './GroupAgentBuilderContextInjector';
 export { GroupContextInjector } from './GroupContextInjector';
 export { GTDPlanInjector } from './GTDPlanInjector';
@@ -18,6 +20,8 @@ export type {
   AgentBuilderContextInjectorConfig,
   OfficialToolItem,
 } from './AgentBuilderContextInjector';
+export type { EvalContext, EvalContextSystemInjectorConfig } from './EvalContextSystemInjector';
+export type { ForceFinishSummaryInjectorConfig } from './ForceFinishSummaryInjector';
 export type {
   GroupAgentBuilderContext,
   GroupAgentBuilderContextInjectorConfig,
@@ -12131,4 +12131,4 @@
     "schemas": {},
     "tables": {}
   }
 }
@@ -43,7 +43,7 @@ beforeEach(async () => {
   ]);
   await trx.insert(files).values({
     id: 'f1',
-    userId: userId,
+    userId,
     url: 'abc',
     name: 'file-1',
     fileType: 'image/png',
@@ -204,6 +204,50 @@ describe('MessageModel Create Tests', () => {
    expect(pluginResult[0].state!).toMatchObject(state);
  });

  it('should handle tool message with null bytes (\\u0000) in plugin state/arguments', async () => {
    // Regression: PostgreSQL rejects \u0000 in text/jsonb columns.
    // This reproduces a real crash from web search tool returning corrupted Unicode,
    // e.g. "montée" encoded as "mont\u0000e9e" instead of "mont\u00e9e".
    const stateWithNullByte = {
      query: 'Auxerre mont\u0000e Ligue 1',
      results: [
        {
          content: 'Some result with null\u0000byte',
          url: 'https://example.com',
        },
      ],
    };

    const argsWithNullByte = `{"query":"Auxerre mont\u0000e9e 2022"}`;

    await expect(
      messageModel.create({
        content: 'tool result',
        plugin: {
          apiName: 'search',
          arguments: argsWithNullByte,
          identifier: 'lobe-web-browsing',
          type: 'builtin',
        },
        pluginState: stateWithNullByte,
        role: 'tool',
        tool_call_id: 'call_null_byte_test',
        sessionId: '1',
      }),
    ).resolves.toBeDefined();

    // Verify the data was stored and null bytes were handled
    const pluginResult = await serverDB
      .select()
      .from(messagePlugins)
      .where(eq(messagePlugins.toolCallId, 'call_null_byte_test'));
    expect(pluginResult).toHaveLength(1);
    expect(pluginResult[0].identifier).toBe('lobe-web-browsing');
    // The stored data should not contain null bytes
    expect(JSON.stringify(pluginResult[0].state)).not.toContain('\u0000');
    expect(pluginResult[0].arguments).not.toContain('\u0000');
  });

  describe('create with advanced parameters', () => {
    it('should create a message with custom ID', async () => {
      const customId = 'custom-msg-id';
@@ -0,0 +1,473 @@
import { eq } from 'drizzle-orm';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';

import { getTestDB } from '../../../core/getTestDB';
import {
  agentEvalBenchmarks,
  agentEvalDatasets,
  agentEvalRuns,
  agentEvalTestCases,
  users,
} from '../../../schemas';
import { AgentEvalBenchmarkModel } from '../benchmark';

const serverDB = await getTestDB();

const userId = 'benchmark-test-user';
const userId2 = 'benchmark-test-user-2';
const benchmarkModel = new AgentEvalBenchmarkModel(serverDB, userId);

beforeEach(async () => {
  await serverDB.delete(agentEvalRuns);
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);

  // Create test users (needed for runs FK constraint)
  await serverDB.insert(users).values([{ id: userId }, { id: userId2 }]);
});

afterEach(async () => {
  await serverDB.delete(agentEvalRuns);
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);
});

describe('AgentEvalBenchmarkModel', () => {
  describe('create', () => {
    it('should create a new benchmark', async () => {
      const params = {
        identifier: 'test-benchmark',
        name: 'Test Benchmark',
        description: 'Test description',
        rubrics: [
          {
            id: 'rubric-1',
            name: 'accuracy',
            type: 'llm-rubric' as const,
            config: { criteria: 'Measures accuracy' },
            weight: 1,
            threshold: 0.7,
          },
        ],
        referenceUrl: 'https://example.com',
        metadata: { version: 1 },
        isSystem: false,
      };

      const result = await benchmarkModel.create(params);

      expect(result).toBeDefined();
      expect(result.identifier).toBe('test-benchmark');
      expect(result.name).toBe('Test Benchmark');
      expect(result.description).toBe('Test description');
      expect(result.rubrics).toEqual(params.rubrics);
      expect(result.referenceUrl).toBe('https://example.com');
      expect(result.metadata).toEqual({ version: 1 });
      expect(result.isSystem).toBe(false);
      expect(result.createdAt).toBeDefined();
      expect(result.updatedAt).toBeDefined();
    });

    it('should create a system benchmark', async () => {
      const params = {
        identifier: 'system-benchmark',
        name: 'System Benchmark',
        rubrics: [],
        isSystem: true,
      };

      const result = await benchmarkModel.create(params);

      expect(result.isSystem).toBe(true);
      expect(result.identifier).toBe('system-benchmark');
    });
  });

  describe('delete', () => {
    it('should delete a user-created benchmark', async () => {
      const [benchmark] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'delete-test',
          name: 'Delete Test',
          rubrics: [],
          isSystem: false,
        })
        .returning();

      await benchmarkModel.delete(benchmark.id);

      const deleted = await serverDB.query.agentEvalBenchmarks.findFirst({
        where: eq(agentEvalBenchmarks.id, benchmark.id),
      });
      expect(deleted).toBeUndefined();
    });

    it('should not delete a system benchmark', async () => {
      const [systemBenchmark] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'system-benchmark',
          name: 'System Benchmark',
          rubrics: [],
          isSystem: true,
        })
        .returning();

      await benchmarkModel.delete(systemBenchmark.id);

      const stillExists = await serverDB.query.agentEvalBenchmarks.findFirst({
        where: eq(agentEvalBenchmarks.id, systemBenchmark.id),
      });
      expect(stillExists).toBeDefined();
    });

    it('should return 0 rowCount when benchmark not found', async () => {
      await benchmarkModel.delete('non-existent-id');
      // No rowCount in PGlite, just verify no error
    });
  });

  describe('query', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalBenchmarks).values([
        {
          identifier: 'system-1',
          name: 'System 1',
          rubrics: [],
          isSystem: true,
        },
        {
          identifier: 'user-1',
          name: 'User 1',
          rubrics: [],
          isSystem: false,
        },
        {
          identifier: 'system-2',
          name: 'System 2',
          rubrics: [],
          isSystem: true,
        },
      ]);
    });

    it('should query all benchmarks including system', async () => {
      const results = await benchmarkModel.query(true);

      expect(results).toHaveLength(3);
      expect(results.map((r) => r.identifier)).toContain('system-1');
      expect(results.map((r) => r.identifier)).toContain('user-1');
      expect(results.map((r) => r.identifier)).toContain('system-2');
    });

    it('should query only user-created benchmarks', async () => {
      const results = await benchmarkModel.query(false);

      expect(results).toHaveLength(1);
      expect(results[0].identifier).toBe('user-1');
      expect(results[0].isSystem).toBe(false);
    });

    it('should default to including system benchmarks', async () => {
      const results = await benchmarkModel.query();

      expect(results).toHaveLength(3);
    });

    it('should order by createdAt descending', async () => {
      const results = await benchmarkModel.query(true);

      // The newest should come first
      // Order may vary in PGlite due to timing
      expect(results.length).toBeGreaterThanOrEqual(3);
    });

    it('should return datasetCount for benchmarks with datasets', async () => {
      // Find the user-1 benchmark
      const benchmarks = await serverDB.query.agentEvalBenchmarks.findMany();
      const userBenchmark = benchmarks.find((b) => b.identifier === 'user-1')!;

      // Add 2 datasets to it
      await serverDB.insert(agentEvalDatasets).values([
        {
          benchmarkId: userBenchmark.id,
          identifier: 'ds-1',
          name: 'Dataset 1',
          userId,
        },
        {
          benchmarkId: userBenchmark.id,
          identifier: 'ds-2',
          name: 'Dataset 2',
          userId,
        },
      ]);

      const results = await benchmarkModel.query(true);
      const result = results.find((r) => r.identifier === 'user-1')!;

      expect(result.datasetCount).toBe(2);
    });

    it('should return testCaseCount for benchmarks with test cases', async () => {
      const benchmarks = await serverDB.query.agentEvalBenchmarks.findMany();
      const userBenchmark = benchmarks.find((b) => b.identifier === 'user-1')!;

      // Add a dataset
      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId: userBenchmark.id,
          identifier: 'ds-for-cases',
          name: 'Dataset for Cases',
          userId,
        })
        .returning();

      // Add 3 test cases to the dataset
      await serverDB.insert(agentEvalTestCases).values([
        { datasetId: dataset.id, content: { input: 'test' }, sortOrder: 1, userId },
        { datasetId: dataset.id, content: { input: 'test' }, sortOrder: 2, userId },
        { datasetId: dataset.id, content: { input: 'test' }, sortOrder: 3, userId },
      ]);

      const results = await benchmarkModel.query(true);
      const result = results.find((r) => r.identifier === 'user-1')!;

      expect(result.testCaseCount).toBe(3);
    });

    it('should return runCount for benchmarks with runs', async () => {
      const benchmarks = await serverDB.query.agentEvalBenchmarks.findMany();
      const userBenchmark = benchmarks.find((b) => b.identifier === 'user-1')!;

      // Add a dataset
      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId: userBenchmark.id,
          identifier: 'ds-for-runs',
          name: 'Dataset for Runs',
          userId,
        })
        .returning();

      // Add 2 runs
      await serverDB.insert(agentEvalRuns).values([
        { datasetId: dataset.id, userId, status: 'idle' },
        { datasetId: dataset.id, userId, status: 'idle' },
      ]);

      const results = await benchmarkModel.query(true);
      const result = results.find((r) => r.identifier === 'user-1')!;

      expect(result.runCount).toBe(2);
    });

    it('should only count runs belonging to the current user in runCount', async () => {
      const benchmarks = await serverDB.query.agentEvalBenchmarks.findMany();
      const userBenchmark = benchmarks.find((b) => b.identifier === 'user-1')!;

      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId: userBenchmark.id,
          identifier: 'ds-isolation',
          name: 'Dataset Isolation',
          userId,
        })
        .returning();

      // Add runs for current user and another user
      await serverDB.insert(agentEvalRuns).values([
        { datasetId: dataset.id, userId, status: 'idle' },
        { datasetId: dataset.id, userId, status: 'completed' },
        { datasetId: dataset.id, userId: userId2, status: 'idle' },
        { datasetId: dataset.id, userId: userId2, status: 'completed' },
        { datasetId: dataset.id, userId: userId2, status: 'running' },
      ]);

      const results = await benchmarkModel.query(true);
      const result = results.find((r) => r.identifier === 'user-1')!;

      // Should only count the 2 runs from the current user
      expect(result.runCount).toBe(2);
    });

    it('should only return recentRuns belonging to the current user', async () => {
      const benchmarks = await serverDB.query.agentEvalBenchmarks.findMany();
      const userBenchmark = benchmarks.find((b) => b.identifier === 'user-1')!;

      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId: userBenchmark.id,
          identifier: 'ds-recent-isolation',
          name: 'Dataset Recent Isolation',
          userId,
        })
        .returning();

      // Add runs for both users
      const [myRun] = await serverDB
        .insert(agentEvalRuns)
        .values([
          { datasetId: dataset.id, userId, status: 'completed', name: 'My Run' },
          { datasetId: dataset.id, userId: userId2, status: 'completed', name: 'Other Run' },
        ])
        .returning();

      const results = await benchmarkModel.query(true);
      const result = results.find((r) => r.identifier === 'user-1')!;

      // Should only include the current user's runs
      expect(result.recentRuns).toHaveLength(1);
      expect(result.recentRuns[0].userId).toBe(userId);
      expect(result.recentRuns[0].name).toBe('My Run');
    });

    it('should return 0 counts for benchmarks without related data', async () => {
      const results = await benchmarkModel.query(true);
      const result = results.find((r) => r.identifier === 'user-1')!;

      expect(result.datasetCount).toBe(0);
      expect(result.testCaseCount).toBe(0);
      expect(result.runCount).toBe(0);
    });
  });

  describe('findById', () => {
    it('should find a benchmark by id', async () => {
      const [benchmark] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'find-test',
          name: 'Find Test',
          rubrics: [],
          isSystem: false,
        })
        .returning();

      const result = await benchmarkModel.findById(benchmark.id);

      expect(result).toBeDefined();
      expect(result?.id).toBe(benchmark.id);
      expect(result?.identifier).toBe('find-test');
    });

    it('should return undefined when benchmark not found', async () => {
      const result = await benchmarkModel.findById('non-existent-id');
      expect(result).toBeUndefined();
    });
  });

  describe('findByIdentifier', () => {
    it('should find a benchmark by identifier', async () => {
      await serverDB.insert(agentEvalBenchmarks).values({
        identifier: 'unique-identifier',
        name: 'Unique Test',
        rubrics: [],
        isSystem: false,
      });

      const result = await benchmarkModel.findByIdentifier('unique-identifier');

      expect(result).toBeDefined();
      expect(result?.identifier).toBe('unique-identifier');
      expect(result?.name).toBe('Unique Test');
    });

    it('should return undefined when identifier not found', async () => {
      const result = await benchmarkModel.findByIdentifier('non-existent');
      expect(result).toBeUndefined();
    });
  });

  describe('update', () => {
    it('should update a user-created benchmark', async () => {
      const [benchmark] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'update-test',
          name: 'Original Name',
          rubrics: [],
          isSystem: false,
        })
        .returning();

      const result = await benchmarkModel.update(benchmark.id, {
        name: 'Updated Name',
        description: 'New description',
      });

      expect(result).toBeDefined();
      expect(result?.name).toBe('Updated Name');
      expect(result?.description).toBe('New description');
      expect(result?.updatedAt).toBeDefined();
      expect(result?.updatedAt.getTime()).toBeGreaterThanOrEqual(result!.createdAt.getTime());
    });

    it('should not update a system benchmark', async () => {
      const [systemBenchmark] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'system-benchmark',
          name: 'System Benchmark',
          rubrics: [],
          isSystem: true,
        })
        .returning();

      const result = await benchmarkModel.update(systemBenchmark.id, {
        name: 'Attempted Update',
      });

      expect(result).toBeUndefined();

      const unchanged = await benchmarkModel.findById(systemBenchmark.id);
      expect(unchanged?.name).toBe('System Benchmark');
    });

    it('should return undefined when benchmark not found', async () => {
      const result = await benchmarkModel.update('non-existent-id', {
        name: 'New Name',
      });

      expect(result).toBeUndefined();
    });

    it('should update only specified fields', async () => {
      const [benchmark] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'partial-update',
          name: 'Original',
          description: 'Original Desc',
          rubrics: [],
          isSystem: false,
        })
        .returning();

      const result = await benchmarkModel.update(benchmark.id, {
        name: 'Only Name Changed',
      });

      expect(result?.name).toBe('Only Name Changed');
      expect(result?.description).toBe('Original Desc');
    });
  });
});
399 packages/database/src/models/agentEval/__tests__/dataset.test.ts Normal file
@@ -0,0 +1,399 @@
import { eq } from 'drizzle-orm';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';

import { getTestDB } from '../../../core/getTestDB';
import {
  agentEvalBenchmarks,
  agentEvalDatasets,
  agentEvalTestCases,
  users,
} from '../../../schemas';
import { AgentEvalDatasetModel } from '../dataset';

const serverDB = await getTestDB();

const userId = 'dataset-test-user';
const userId2 = 'dataset-test-user-2';
const datasetModel = new AgentEvalDatasetModel(serverDB, userId);

let benchmarkId: string;

beforeEach(async () => {
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);

  // Create test users
  await serverDB.insert(users).values([{ id: userId }, { id: userId2 }]);

  // Create a test benchmark
  const [benchmark] = await serverDB
    .insert(agentEvalBenchmarks)
    .values({
      identifier: 'test-benchmark',
      name: 'Test Benchmark',
      rubrics: [],
      isSystem: false,
    })
    .returning();
  benchmarkId = benchmark.id;
});

afterEach(async () => {
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);
});

describe('AgentEvalDatasetModel', () => {
  describe('create', () => {
    it('should create a new dataset with userId', async () => {
      const params = {
        benchmarkId,
        identifier: 'test-dataset',
        name: 'Test Dataset',
        description: 'Test description',
        metadata: { version: 1 },
      };

      const result = await datasetModel.create(params);

      expect(result).toBeDefined();
      expect(result.benchmarkId).toBe(benchmarkId);
      expect(result.identifier).toBe('test-dataset');
      expect(result.name).toBe('Test Dataset');
      expect(result.description).toBe('Test description');
      expect(result.metadata).toEqual({ version: 1 });
      expect(result.userId).toBe(userId);
      expect(result.createdAt).toBeDefined();
      expect(result.updatedAt).toBeDefined();
    });

    it('should create a dataset with minimal parameters', async () => {
      const params = {
        benchmarkId,
        identifier: 'minimal-dataset',
        name: 'Minimal Dataset',
      };

      const result = await datasetModel.create(params);

      expect(result).toBeDefined();
      expect(result.identifier).toBe('minimal-dataset');
      expect(result.userId).toBe(userId);
    });
  });

  describe('delete', () => {
    it('should delete a dataset owned by the user', async () => {
      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'delete-test',
          name: 'Delete Test',
          userId,
        })
        .returning();

      await datasetModel.delete(dataset.id);

      const deleted = await serverDB.query.agentEvalDatasets.findFirst({
        where: eq(agentEvalDatasets.id, dataset.id),
      });
      expect(deleted).toBeUndefined();
    });

    it('should not delete a dataset owned by another user', async () => {
      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'other-user-dataset',
          name: 'Other User Dataset',
          userId: userId2,
        })
        .returning();

      await datasetModel.delete(dataset.id);

      const stillExists = await serverDB.query.agentEvalDatasets.findFirst({
        where: eq(agentEvalDatasets.id, dataset.id),
      });
      expect(stillExists).toBeDefined();
    });

    it('should return 0 rowCount when dataset not found', async () => {
      await datasetModel.delete('non-existent-id');
      // No rowCount in PGlite
    });
  });

  describe('query', () => {
    beforeEach(async () => {
      // Create another benchmark
      const [benchmark2] = await serverDB
        .insert(agentEvalBenchmarks)
        .values({
          identifier: 'benchmark-2',
          name: 'Benchmark 2',
          rubrics: [],
          isSystem: false,
        })
        .returning();

      // Insert datasets
      await serverDB.insert(agentEvalDatasets).values([
        {
          benchmarkId,
          identifier: 'user-dataset-1',
          name: 'User Dataset 1',
          userId,
        },
        {
          benchmarkId: benchmark2.id,
          identifier: 'user-dataset-2',
          name: 'User Dataset 2',
          userId,
        },
        {
          benchmarkId,
          identifier: 'system-dataset',
          name: 'System Dataset',
          userId: null, // System dataset
        },
        {
          benchmarkId,
          identifier: 'other-user-dataset',
          name: 'Other User Dataset',
          userId: userId2,
        },
      ]);
    });

    it('should query all datasets (user + system)', async () => {
      const results = await datasetModel.query();

      expect(results).toHaveLength(3); // user-dataset-1, user-dataset-2, system-dataset
      expect(results.map((r) => r.identifier)).toContain('user-dataset-1');
      expect(results.map((r) => r.identifier)).toContain('user-dataset-2');
      expect(results.map((r) => r.identifier)).toContain('system-dataset');
      expect(results.map((r) => r.identifier)).not.toContain('other-user-dataset');
    });

    it('should query datasets by benchmarkId', async () => {
      const results = await datasetModel.query(benchmarkId);

      expect(results).toHaveLength(2); // user-dataset-1, system-dataset
      expect(results.every((r) => r.benchmarkId === benchmarkId)).toBe(true);
    });

    it('should order by createdAt descending', async () => {
      const results = await datasetModel.query();

      // The newest should come first
      // Order may vary, just check we got results
      expect(results.length).toBeGreaterThanOrEqual(2);
    });

    it('should include system datasets (userId is null)', async () => {
      const results = await datasetModel.query();

      const systemDataset = results.find((r) => r.identifier === 'system-dataset');
      expect(systemDataset).toBeDefined();
      expect(systemDataset?.userId).toBeNull();
    });
  });

  describe('findById', () => {
    it('should find a dataset by id (user-owned)', async () => {
      const [dataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'find-test',
          name: 'Find Test',
          userId,
        })
        .returning();

      const result = await datasetModel.findById(dataset.id);

      expect(result).toBeDefined();
      expect(result?.id).toBe(dataset.id);
      expect(result?.identifier).toBe('find-test');
    });

    it('should find a system dataset', async () => {
      const [systemDataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'system-dataset',
          name: 'System Dataset',
          userId: null,
        })
        .returning();

      const result = await datasetModel.findById(systemDataset.id);
|
||||
|
||||
expect(result).toBeDefined();
|
||||
expect(result?.userId).toBeNull();
|
||||
});
|
||||
|
||||
it('should not find a dataset owned by another user', async () => {
|
||||
const [otherDataset] = await serverDB
|
||||
.insert(agentEvalDatasets)
|
||||
.values({
|
||||
benchmarkId,
|
||||
identifier: 'other-dataset',
|
||||
name: 'Other Dataset',
|
||||
userId: userId2,
|
||||
})
|
||||
.returning();
|
||||
|
||||
const result = await datasetModel.findById(otherDataset.id);
|
||||
|
||||
expect(result).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should return dataset with test cases', async () => {
|
||||
const [dataset] = await serverDB
|
||||
.insert(agentEvalDatasets)
|
||||
.values({
|
||||
benchmarkId,
|
||||
identifier: 'with-cases',
|
||||
name: 'With Cases',
|
||||
userId,
|
||||
})
|
||||
.returning();
|
||||
|
||||
// Add test cases
|
||||
await serverDB.insert(agentEvalTestCases).values([
|
||||
{
|
||||
datasetId: dataset.id,
|
||||
content: { input: 'Test 1' },
|
||||
sortOrder: 1,
|
||||
userId,
|
||||
},
|
||||
{
|
||||
datasetId: dataset.id,
|
||||
content: { input: 'Test 2' },
|
||||
sortOrder: 2,
|
||||
userId,
|
||||
},
|
||||
]);
|
||||
|
||||
const result = await datasetModel.findById(dataset.id);
|
||||
|
||||
expect(result).toBeDefined();
|
||||
expect(result?.testCases).toHaveLength(2);
|
||||
expect(result?.testCases[0].sortOrder).toBe(1);
|
||||
expect(result?.testCases[1].sortOrder).toBe(2);
|
||||
});
|
||||
|
||||
it('should return undefined when dataset not found', async () => {
|
||||
const result = await datasetModel.findById('non-existent-id');
|
||||
expect(result).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('update', () => {
|
||||
it('should update a dataset owned by the user', async () => {
|
||||
const [dataset] = await serverDB
|
||||
.insert(agentEvalDatasets)
|
||||
.values({
|
||||
benchmarkId,
|
||||
identifier: 'update-test',
|
||||
name: 'Original Name',
|
||||
userId,
|
||||
})
|
||||
.returning();
|
||||
|
||||
const result = await datasetModel.update(dataset.id, {
|
||||
name: 'Updated Name',
|
||||
description: 'New description',
|
||||
});
|
||||
|
||||
expect(result).toBeDefined();
|
||||
expect(result?.name).toBe('Updated Name');
|
||||
expect(result?.description).toBe('New description');
|
||||
expect(result?.updatedAt).toBeDefined();
|
||||
expect(result?.updatedAt.getTime()).toBeGreaterThanOrEqual(result!.createdAt.getTime());
|
||||
});
|
||||
|
||||
it('should not update a dataset owned by another user', async () => {
|
||||
const [dataset] = await serverDB
|
||||
.insert(agentEvalDatasets)
|
||||
.values({
|
||||
benchmarkId,
|
||||
identifier: 'other-dataset',
|
||||
name: 'Other Dataset',
|
||||
userId: userId2,
|
||||
})
|
||||
.returning();
|
||||
|
||||
const result = await datasetModel.update(dataset.id, {
|
||||
name: 'Attempted Update',
|
||||
});
|
||||
|
||||
expect(result).toBeUndefined();
|
||||
|
||||
const unchanged = await serverDB.query.agentEvalDatasets.findFirst({
|
||||
where: eq(agentEvalDatasets.id, dataset.id),
|
||||
});
|
||||
expect(unchanged?.name).toBe('Other Dataset');
|
||||
});
|
||||
|
||||
it('should return undefined when dataset not found', async () => {
|
||||
const result = await datasetModel.update('non-existent-id', {
|
||||
name: 'New Name',
|
||||
});
|
||||
|
||||
expect(result).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should update only specified fields', async () => {
|
||||
const [dataset] = await serverDB
|
||||
.insert(agentEvalDatasets)
|
||||
.values({
|
||||
benchmarkId,
|
||||
identifier: 'partial-update',
|
||||
name: 'Original',
|
||||
description: 'Original Desc',
|
||||
userId,
|
||||
})
|
||||
.returning();
|
||||
|
||||
const result = await datasetModel.update(dataset.id, {
|
||||
name: 'Only Name Changed',
|
||||
});
|
||||
|
||||
expect(result?.name).toBe('Only Name Changed');
|
||||
expect(result?.description).toBe('Original Desc');
|
||||
});
|
||||
|
||||
it('should update metadata', async () => {
|
||||
const [dataset] = await serverDB
|
||||
.insert(agentEvalDatasets)
|
||||
.values({
|
||||
benchmarkId,
|
||||
identifier: 'metadata-update',
|
||||
name: 'Metadata Test',
|
||||
metadata: { version: 1 },
|
||||
userId,
|
||||
})
|
||||
.returning();
|
||||
|
||||
const result = await datasetModel.update(dataset.id, {
|
||||
metadata: { version: 2, updated: true },
|
||||
});
|
||||
|
||||
expect(result?.metadata).toEqual({ version: 2, updated: true });
|
||||
});
|
||||
});
|
||||
});
|
||||
513
packages/database/src/models/agentEval/__tests__/run.test.ts
Normal file

@@ -0,0 +1,513 @@
import { eq } from 'drizzle-orm';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';

import { getTestDB } from '../../../core/getTestDB';
import {
  agentEvalBenchmarks,
  agentEvalDatasets,
  agentEvalRuns,
  agentEvalTestCases,
  users,
} from '../../../schemas';
import { AgentEvalRunModel } from '../run';

const serverDB = await getTestDB();

const userId = 'run-test-user';
const userId2 = 'run-test-user-2';
const runModel = new AgentEvalRunModel(serverDB, userId);

let benchmarkId: string;
let datasetId: string;

beforeEach(async () => {
  await serverDB.delete(agentEvalRuns);
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);

  // Create test users
  await serverDB.insert(users).values([{ id: userId }, { id: userId2 }]);

  // Create a test benchmark
  const [benchmark] = await serverDB
    .insert(agentEvalBenchmarks)
    .values({
      identifier: 'test-benchmark',
      name: 'Test Benchmark',
      rubrics: [],
      isSystem: false,
    })
    .returning();
  benchmarkId = benchmark.id;

  // Create a test dataset
  const [dataset] = await serverDB
    .insert(agentEvalDatasets)
    .values({
      benchmarkId,
      identifier: 'test-dataset',
      name: 'Test Dataset',
      userId,
    })
    .returning();
  datasetId = dataset.id;
});

afterEach(async () => {
  await serverDB.delete(agentEvalRuns);
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);
});

describe('AgentEvalRunModel', () => {
  describe('create', () => {
    it('should create a new run with minimal parameters', async () => {
      const params = {
        datasetId,
      };

      const result = await runModel.create(params);

      expect(result).toBeDefined();
      expect(result.datasetId).toBe(datasetId);
      expect(result.userId).toBe(userId);
      expect(result.status).toBe('idle');
      expect(result.name).toBeNull();
      expect(result.targetAgentId).toBeNull();
      expect(result.config).toBeNull();
      expect(result.metrics).toBeNull();
      expect(result.createdAt).toBeDefined();
      expect(result.updatedAt).toBeDefined();
    });

    it('should create a run with all parameters', async () => {
      const params = {
        datasetId,
        name: 'Test Run',
        status: 'pending' as const,
        config: {
          concurrency: 5,
          timeout: 300000,
        },
        metrics: {
          totalCases: 10,
          passedCases: 0,
          failedCases: 0,
          averageScore: 0,
          passRate: 0,
        },
      };

      const result = await runModel.create(params);

      expect(result).toBeDefined();
      expect(result.datasetId).toBe(datasetId);
      expect(result.name).toBe('Test Run');
      expect(result.status).toBe('pending');
      expect(result.config).toEqual({ concurrency: 5, timeout: 300000 });
      expect(result.metrics).toMatchObject({
        totalCases: 10,
        passedCases: 0,
        failedCases: 0,
        averageScore: 0,
        passRate: 0,
      });
    });

    it('should default status to idle', async () => {
      const result = await runModel.create({ datasetId });

      expect(result.status).toBe('idle');
    });
  });

  describe('query', () => {
    beforeEach(async () => {
      // Create another dataset
      const [dataset2] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'dataset-2',
          name: 'Dataset 2',
          userId,
        })
        .returning();

      // Insert runs
      await serverDB.insert(agentEvalRuns).values([
        {
          datasetId,
          userId,
          name: 'Run 1',
          status: 'idle',
        },
        {
          datasetId,
          userId,
          name: 'Run 2',
          status: 'pending',
        },
        {
          datasetId: dataset2.id,
          userId,
          name: 'Run 3',
          status: 'running',
        },
        {
          datasetId,
          userId: userId2,
          name: 'Run 4 - Other User',
          status: 'completed',
        },
      ]);
    });

    it('should query all runs for the user', async () => {
      const results = await runModel.query();

      expect(results).toHaveLength(3);
      expect(results.map((r) => r.name)).toContain('Run 1');
      expect(results.map((r) => r.name)).toContain('Run 2');
      expect(results.map((r) => r.name)).toContain('Run 3');
      expect(results.map((r) => r.name)).not.toContain('Run 4 - Other User');
    });

    it('should filter by datasetId', async () => {
      const results = await runModel.query({ datasetId });

      expect(results).toHaveLength(2);
      expect(results.every((r) => r.datasetId === datasetId)).toBe(true);
    });

    it('should filter by status', async () => {
      const results = await runModel.query({ status: 'pending' });

      expect(results).toHaveLength(1);
      expect(results[0].name).toBe('Run 2');
      expect(results[0].status).toBe('pending');
    });

    it('should filter by datasetId and status', async () => {
      const results = await runModel.query({
        datasetId,
        status: 'idle',
      });

      expect(results).toHaveLength(1);
      expect(results[0].name).toBe('Run 1');
    });

    it('should apply limit', async () => {
      const results = await runModel.query({ limit: 2 });

      expect(results).toHaveLength(2);
    });

    it('should apply offset', async () => {
      const allResults = await runModel.query();
      const offsetResults = await runModel.query({ offset: 1 });

      expect(offsetResults).toHaveLength(2);
      expect(offsetResults[0].id).toBe(allResults[1].id);
    });

    it('should order by createdAt descending', async () => {
      const results = await runModel.query();

      // Most recent should be first
      expect(results.length).toBeGreaterThanOrEqual(3);
    });
  });

  describe('findById', () => {
    it('should find a run by id', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          name: 'Find Test',
          status: 'idle',
        })
        .returning();

      const result = await runModel.findById(run.id);

      expect(result).toBeDefined();
      expect(result?.id).toBe(run.id);
      expect(result?.name).toBe('Find Test');
    });

    it('should not find a run owned by another user', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId: userId2,
          name: 'Other User Run',
          status: 'idle',
        })
        .returning();

      const result = await runModel.findById(run.id);

      expect(result).toBeUndefined();
    });

    it('should return undefined when run not found', async () => {
      const result = await runModel.findById('non-existent-id');
      expect(result).toBeUndefined();
    });
  });

  describe('update', () => {
    it('should update a run owned by the user', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          name: 'Original Name',
          status: 'idle',
        })
        .returning();

      const result = await runModel.update(run.id, {
        name: 'Updated Name',
        status: 'running',
        metrics: {
          totalCases: 10,
          passedCases: 5,
          failedCases: 0,
          averageScore: 0.85,
          passRate: 0.5,
        },
      });

      expect(result).toBeDefined();
      expect(result?.name).toBe('Updated Name');
      expect(result?.status).toBe('running');
      expect(result?.metrics).toMatchObject({
        totalCases: 10,
        passedCases: 5,
        failedCases: 0,
        averageScore: 0.85,
        passRate: 0.5,
      });
      expect(result?.updatedAt).toBeDefined();
      expect(result?.updatedAt.getTime()).toBeGreaterThanOrEqual(result!.createdAt.getTime());
    });

    it('should not update a run owned by another user', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId: userId2,
          name: 'Other User Run',
          status: 'idle',
        })
        .returning();

      const result = await runModel.update(run.id, {
        name: 'Attempted Update',
      });

      expect(result).toBeUndefined();

      const unchanged = await serverDB.query.agentEvalRuns.findFirst({
        where: eq(agentEvalRuns.id, run.id),
      });
      expect(unchanged?.name).toBe('Other User Run');
    });

    it('should return undefined when run not found', async () => {
      const result = await runModel.update('non-existent-id', {
        name: 'New Name',
      });

      expect(result).toBeUndefined();
    });

    it('should update only specified fields', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          name: 'Original',
          status: 'idle',
        })
        .returning();

      const result = await runModel.update(run.id, {
        status: 'pending',
      });

      expect(result?.name).toBe('Original');
      expect(result?.status).toBe('pending');
    });

    it('should update config', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          status: 'idle',
        })
        .returning();

      const result = await runModel.update(run.id, {
        config: { concurrency: 10, timeout: 600000 },
      });

      expect(result?.config).toEqual({ concurrency: 10, timeout: 600000 });
    });

    it('should update metrics incrementally', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          status: 'running',
          metrics: {
            totalCases: 10,
            passedCases: 0,
            failedCases: 0,
            averageScore: 0,
            passRate: 0,
          },
        })
        .returning();

      const result = await runModel.update(run.id, {
        metrics: {
          totalCases: 10,
          passedCases: 5,
          failedCases: 1,
          averageScore: 0.75,
          passRate: 0.5,
        },
      });

      expect(result?.metrics).toMatchObject({
        totalCases: 10,
        passedCases: 5,
        failedCases: 1,
        averageScore: 0.75,
        passRate: 0.5,
      });
    });
  });

  describe('delete', () => {
    it('should delete a run owned by the user', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          name: 'Delete Test',
          status: 'idle',
        })
        .returning();

      await runModel.delete(run.id);

      const deleted = await serverDB.query.agentEvalRuns.findFirst({
        where: eq(agentEvalRuns.id, run.id),
      });
      expect(deleted).toBeUndefined();
    });

    it('should not delete a run owned by another user', async () => {
      const [run] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId: userId2,
          name: 'Other User Run',
          status: 'idle',
        })
        .returning();

      await runModel.delete(run.id);

      const stillExists = await serverDB.query.agentEvalRuns.findFirst({
        where: eq(agentEvalRuns.id, run.id),
      });
      expect(stillExists).toBeDefined();
    });
  });

  describe('countByDatasetId', () => {
    beforeEach(async () => {
      // Create another dataset
      const [dataset2] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'dataset-2',
          name: 'Dataset 2',
          userId,
        })
        .returning();

      // Insert runs
      await serverDB.insert(agentEvalRuns).values([
        {
          datasetId,
          userId,
          status: 'idle',
        },
        {
          datasetId,
          userId,
          status: 'pending',
        },
        {
          datasetId: dataset2.id,
          userId,
          status: 'running',
        },
        {
          datasetId,
          userId: userId2, // Other user's run
          status: 'completed',
        },
      ]);
    });

    it('should count runs for a specific dataset and user', async () => {
      const count = await runModel.countByDatasetId(datasetId);

      expect(count).toBe(2); // Only user's runs
    });

    it('should return 0 when no runs exist', async () => {
      const [emptyDataset] = await serverDB
        .insert(agentEvalDatasets)
        .values({
          benchmarkId,
          identifier: 'empty-dataset',
          name: 'Empty Dataset',
          userId,
        })
        .returning();

      const count = await runModel.countByDatasetId(emptyDataset.id);

      expect(count).toBe(0);
    });
  });
});

@@ -0,0 +1,738 @@
import { eq, sql } from 'drizzle-orm';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';

import { getTestDB } from '../../../core/getTestDB';
import {
  agentEvalBenchmarks,
  agentEvalDatasets,
  agentEvalRuns,
  agentEvalRunTopics,
  agentEvalTestCases,
  topics,
  users,
} from '../../../schemas';
import { AgentEvalRunTopicModel } from '../runTopic';

const serverDB = await getTestDB();

const userId = 'run-topic-test-user';
const runTopicModel = new AgentEvalRunTopicModel(serverDB, userId);

let benchmarkId: string;
let datasetId: string;
let runId: string;
let testCaseId1: string;
let testCaseId2: string;
let topicId1: string;
let topicId2: string;

beforeEach(async () => {
  await serverDB.delete(agentEvalRunTopics);
  await serverDB.delete(topics);
  await serverDB.delete(agentEvalRuns);
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);

  // Create test user
  await serverDB.insert(users).values({ id: userId });

  // Create test benchmark
  const [benchmark] = await serverDB
    .insert(agentEvalBenchmarks)
    .values({
      identifier: 'test-benchmark',
      name: 'Test Benchmark',
      rubrics: [],
      isSystem: false,
    })
    .returning();
  benchmarkId = benchmark.id;

  // Create test dataset
  const [dataset] = await serverDB
    .insert(agentEvalDatasets)
    .values({
      benchmarkId,
      identifier: 'test-dataset',
      name: 'Test Dataset',
      userId,
    })
    .returning();
  datasetId = dataset.id;

  // Create test cases
  const [testCase1, testCase2] = await serverDB
    .insert(agentEvalTestCases)
    .values([
      {
        userId,
        datasetId,
        content: { input: 'Test question 1' },
        sortOrder: 1,
      },
      {
        userId,
        datasetId,
        content: { input: 'Test question 2' },
        sortOrder: 2,
      },
    ])
    .returning();
  testCaseId1 = testCase1.id;
  testCaseId2 = testCase2.id;

  // Create test run
  const [run] = await serverDB
    .insert(agentEvalRuns)
    .values({
      datasetId,
      userId,
      name: 'Test Run',
      status: 'idle',
    })
    .returning();
  runId = run.id;

  // Create topics
  const [topic1, topic2] = await serverDB
    .insert(topics)
    .values([
      {
        userId,
        title: 'Topic 1',
        trigger: 'eval',
        mode: 'test',
      },
      {
        userId,
        title: 'Topic 2',
        trigger: 'eval',
        mode: 'test',
      },
    ])
    .returning();
  topicId1 = topic1.id;
  topicId2 = topic2.id;
});

afterEach(async () => {
  await serverDB.delete(agentEvalRunTopics);
  await serverDB.delete(topics);
  await serverDB.delete(agentEvalRuns);
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);
});

describe('AgentEvalRunTopicModel', () => {
  describe('batchCreate', () => {
    it('should create multiple run topics', async () => {
      const params = [
        {
          runId,
          topicId: topicId1,
          testCaseId: testCaseId1,
        },
        {
          runId,
          topicId: topicId2,
          testCaseId: testCaseId2,
        },
      ];

      const results = await runTopicModel.batchCreate(params);

      expect(results).toHaveLength(2);
      expect(results[0].runId).toBe(runId);
      expect(results[0].topicId).toBe(topicId1);
      expect(results[0].testCaseId).toBe(testCaseId1);
      expect(results[0].createdAt).toBeDefined();

      expect(results[1].runId).toBe(runId);
      expect(results[1].topicId).toBe(topicId2);
      expect(results[1].testCaseId).toBe(testCaseId2);
    });

    it('should handle empty array', async () => {
      const results = await runTopicModel.batchCreate([]);

      expect(results).toHaveLength(0);
    });
  });

  describe('findByRunId', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalRunTopics).values([
        {
          userId,
          runId,
          topicId: topicId1,
          testCaseId: testCaseId1,
        },
        {
          userId,
          runId,
          topicId: topicId2,
          testCaseId: testCaseId2,
        },
      ]);
    });

    it('should find run topics with relations', async () => {
      const results = await runTopicModel.findByRunId(runId);

      expect(results).toHaveLength(2);
      expect(results[0].runId).toBe(runId);
      expect(results[0].status).toBeNull();
      expect(results[0].topic).toBeDefined();
      expect((results[0].topic as any).id).toBe(topicId1);
      expect((results[0].topic as any).title).toBe('Topic 1');
      expect(results[0].testCase).toBeDefined();
      expect((results[0].testCase as any).id).toBe(testCaseId1);
    });

    it('should return status field after update', async () => {
      await runTopicModel.updateByRunAndTopic(runId, topicId1, { status: 'passed' });
      await runTopicModel.updateByRunAndTopic(runId, topicId2, { status: 'error' });

      const results = await runTopicModel.findByRunId(runId);

      expect(results[0].status).toBe('passed');
      expect(results[1].status).toBe('error');
    });

    it('should order by createdAt ascending', async () => {
      const results = await runTopicModel.findByRunId(runId);

      expect(results.length).toBe(2);
      // First created should be first
      expect(results[0].topicId).toBe(topicId1);
      expect(results[1].topicId).toBe(topicId2);
    });

    it('should return empty array when no topics exist', async () => {
      const [emptyRun] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          status: 'idle',
        })
        .returning();

      const results = await runTopicModel.findByRunId(emptyRun.id);

      expect(results).toHaveLength(0);
    });
  });

  describe('deleteByRunId', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalRunTopics).values([
        {
          userId,
          runId,
          topicId: topicId1,
          testCaseId: testCaseId1,
        },
        {
          userId,
          runId,
          topicId: topicId2,
          testCaseId: testCaseId2,
        },
      ]);
    });

    it('should delete all topics for a run', async () => {
      await runTopicModel.deleteByRunId(runId);

      const remaining = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.runId, runId),
      });

      expect(remaining).toHaveLength(0);
    });

    it('should not affect other runs', async () => {
      // Create another run with topics
      const [otherRun] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          status: 'idle',
        })
        .returning();

      const [otherTopic] = await serverDB
        .insert(topics)
        .values({
          userId,
          title: 'Other Topic',
          trigger: 'eval',
        })
        .returning();

      await serverDB.insert(agentEvalRunTopics).values({
        userId,
        runId: otherRun.id,
        topicId: otherTopic.id,
        testCaseId: testCaseId1,
      });

      await runTopicModel.deleteByRunId(runId);

      const otherRunTopics = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.runId, otherRun.id),
      });

      expect(otherRunTopics).toHaveLength(1);
    });
  });

  describe('findByTestCaseId', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalRunTopics).values([
        {
          userId,
          runId,
          topicId: topicId1,
          testCaseId: testCaseId1,
        },
        {
          userId,
          runId,
          topicId: topicId2,
          testCaseId: testCaseId2,
        },
      ]);
    });

    it('should find topics by test case id', async () => {
      const results = await runTopicModel.findByTestCaseId(testCaseId1);

      expect(results).toHaveLength(1);
      expect(results[0].testCaseId).toBe(testCaseId1);
      expect(results[0].topicId).toBe(topicId1);
    });

    it('should return empty array when no topics exist for test case', async () => {
      const [newTestCase] = await serverDB
        .insert(agentEvalTestCases)
        .values({
          userId,
          datasetId,
          content: { input: 'Unused test case' },
          sortOrder: 3,
        })
        .returning();

      const results = await runTopicModel.findByTestCaseId(newTestCase.id);

      expect(results).toHaveLength(0);
    });
  });

  describe('findByRunAndTestCase', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalRunTopics).values([
        {
          userId,
          runId,
          topicId: topicId1,
          testCaseId: testCaseId1,
        },
        {
          userId,
          runId,
          topicId: topicId2,
          testCaseId: testCaseId2,
        },
      ]);
    });

    it('should find specific run-testcase combination', async () => {
      const result = await runTopicModel.findByRunAndTestCase(runId, testCaseId1);

      expect(result).toBeDefined();
      expect(result?.runId).toBe(runId);
      expect(result?.testCaseId).toBe(testCaseId1);
      expect(result?.topicId).toBe(topicId1);
      expect(result?.status).toBeNull();
    });

    it('should return status field after update', async () => {
      await runTopicModel.updateByRunAndTopic(runId, topicId1, { status: 'failed' });

      const result = await runTopicModel.findByRunAndTestCase(runId, testCaseId1);

      expect(result?.status).toBe('failed');
    });

    it('should return undefined when combination not found', async () => {
      const [otherRun] = await serverDB
        .insert(agentEvalRuns)
        .values({
          datasetId,
          userId,
          status: 'idle',
        })
        .returning();

      const result = await runTopicModel.findByRunAndTestCase(otherRun.id, testCaseId1);

      expect(result).toBeUndefined();
    });
  });

  describe('updateByRunAndTopic', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalRunTopics).values({
        userId,
        runId,
        topicId: topicId1,
        testCaseId: testCaseId1,
      });
    });

    it('should update score and passed fields', async () => {
      const result = await runTopicModel.updateByRunAndTopic(runId, topicId1, {
        score: 0.85,
        passed: true,
        evalResult: {
          rubricScores: [{ rubricId: 'r1', score: 0.85 }],
        },
      });

      expect(result.score).toBe(0.85);
      expect(result.passed).toBe(true);
      expect(result.evalResult).toEqual({
        rubricScores: [{ rubricId: 'r1', score: 0.85 }],
      });
    });

    it('should update only specified fields', async () => {
      await runTopicModel.updateByRunAndTopic(runId, topicId1, {
        score: 0,
        passed: false,
      });

      const updated = await serverDB.query.agentEvalRunTopics.findFirst({
        where: eq(agentEvalRunTopics.topicId, topicId1),
      });

      expect(updated?.score).toBe(0);
      expect(updated?.passed).toBe(false);
      expect(updated?.evalResult).toBeNull();
    });

    it('should update status field', async () => {
      const result = await runTopicModel.updateByRunAndTopic(runId, topicId1, {
        status: 'passed',
        score: 1,
        passed: true,
      });

      expect(result.status).toBe('passed');
      expect(result.score).toBe(1);
      expect(result.passed).toBe(true);
    });

    it('should update status to error with evalResult', async () => {
      const result = await runTopicModel.updateByRunAndTopic(runId, topicId1, {
        status: 'error',
        score: 0,
        passed: false,
        evalResult: {
          error: 'Execution error: insufficient_user_quota',
          rubricScores: [],
        },
      });

      expect(result.status).toBe('error');
      expect(result.passed).toBe(false);
      expect(result.evalResult).toMatchObject({
        error: 'Execution error: insufficient_user_quota',
      });
    });
  });

  describe('batchMarkTimeout', () => {
    it('should mark old running topics as timeout, leave recent ones alone', async () => {
      // Create 3 topics
      const [topic3] = await serverDB
        .insert(topics)
        .values({ userId, title: 'Topic 3', trigger: 'eval', mode: 'test' })
|
||||
.returning();
|
||||
|
||||
await serverDB.insert(agentEvalRunTopics).values([
|
||||
{ userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'running' },
|
||||
{ userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'running' },
|
||||
{ userId, runId, topicId: topic3.id, testCaseId: testCaseId1, status: 'running' },
|
||||
]);
|
||||
|
||||
// Backdate topic1 to 30 min ago, topic2 to 25 min ago, leave topic3 recent
|
||||
await serverDB
|
||||
.update(agentEvalRunTopics)
|
||||
.set({ createdAt: sql`NOW() - interval '30 minutes'` })
|
||||
.where(eq(agentEvalRunTopics.topicId, topicId1));
|
||||
await serverDB
|
||||
.update(agentEvalRunTopics)
|
||||
.set({ createdAt: sql`NOW() - interval '25 minutes'` })
|
||||
.where(eq(agentEvalRunTopics.topicId, topicId2));
|
||||
|
||||
// Timeout = 20 min (1_200_000 ms)
|
||||
const rows = await runTopicModel.batchMarkTimeout(runId, 1_200_000);
|
||||
|
||||
expect(rows).toHaveLength(2); // topic1 (30min) and topic2 (25min) > 20min
|
||||
|
||||
const all = await serverDB.query.agentEvalRunTopics.findMany({
|
||||
where: eq(agentEvalRunTopics.runId, runId),
|
||||
});
|
||||
|
||||
const statusMap = Object.fromEntries(all.map((r) => [r.topicId, r.status]));
|
||||
expect(statusMap[topicId1]).toBe('timeout');
|
||||
expect(statusMap[topicId2]).toBe('timeout');
|
||||
expect(statusMap[topic3.id]).toBe('running'); // recent, not timed out
|
||||
});
|
||||
|
||||
it('should not touch topics already in terminal state', async () => {
|
||||
await serverDB.insert(agentEvalRunTopics).values([
|
||||
{ userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'passed' },
|
||||
{ userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'running' },
|
||||
]);
|
||||
|
||||
// Backdate both to 30 min ago
|
||||
await serverDB
|
||||
.update(agentEvalRunTopics)
|
||||
.set({ createdAt: sql`NOW() - interval '30 minutes'` })
|
||||
.where(eq(agentEvalRunTopics.runId, runId));
|
||||
|
||||
const rows = await runTopicModel.batchMarkTimeout(runId, 1_200_000);
|
||||
|
||||
expect(rows).toHaveLength(1); // only topic2 (running), not topic1 (passed)
|
||||
|
||||
const all = await serverDB.query.agentEvalRunTopics.findMany({
|
||||
where: eq(agentEvalRunTopics.runId, runId),
|
||||
});
|
||||
const statusMap = Object.fromEntries(all.map((r) => [r.topicId, r.status]));
|
||||
expect(statusMap[topicId1]).toBe('passed');
|
||||
expect(statusMap[topicId2]).toBe('timeout');
|
||||
});
|
||||
|
||||
it('should only target running status, not null or pending', async () => {
|
||||
const [topic3] = await serverDB
|
||||
.insert(topics)
|
||||
.values({ userId, title: 'Topic 3', trigger: 'eval', mode: 'test' })
|
||||
.returning();
|
||||
|
||||
await serverDB.insert(agentEvalRunTopics).values([
|
||||
{ userId, runId, topicId: topicId1, testCaseId: testCaseId1 }, // null status
|
||||
{ userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'pending' },
|
||||
{ userId, runId, topicId: topic3.id, testCaseId: testCaseId1, status: 'running' },
|
||||
]);
|
||||
|
||||
// Backdate all to 30 min ago
|
||||
await serverDB
|
||||
.update(agentEvalRunTopics)
|
||||
.set({ createdAt: sql`NOW() - interval '30 minutes'` })
|
||||
.where(eq(agentEvalRunTopics.runId, runId));
|
||||
|
||||
const rows = await runTopicModel.batchMarkTimeout(runId, 1_200_000);
|
||||
|
||||
// Only the running topic should be marked
|
||||
expect(rows).toHaveLength(1);
|
||||
|
||||
const all = await serverDB.query.agentEvalRunTopics.findMany({
|
||||
where: eq(agentEvalRunTopics.runId, runId),
|
||||
});
|
||||
const statusMap = Object.fromEntries(all.map((r) => [r.topicId, r.status]));
|
||||
expect(statusMap[topicId1]).toBeNull(); // unchanged
|
||||
expect(statusMap[topicId2]).toBe('pending'); // unchanged
|
||||
expect(statusMap[topic3.id]).toBe('timeout'); // timed out
|
||||
});
|
||||
|
||||
it('should return 0 when no topics need timeout', async () => {
|
||||
// All topics are recent (just created)
|
||||
await serverDB.insert(agentEvalRunTopics).values([
|
||||
{ userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'running' },
|
||||
{ userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'running' },
|
||||
]);
|
||||
|
||||
const rows = await runTopicModel.batchMarkTimeout(runId, 1_200_000);
|
||||
|
||||
expect(rows).toHaveLength(0);
|
||||
});
|
||||
|
||||
it('should not affect topics from other runs', async () => {
|
||||
const [otherRun] = await serverDB
|
||||
.insert(agentEvalRuns)
|
||||
.values({ datasetId, userId, status: 'running' })
|
||||
.returning();
|
||||
const [otherTopic] = await serverDB
|
||||
.insert(topics)
|
||||
.values({ userId, title: 'Other', trigger: 'eval' })
|
||||
.returning();
|
||||
|
||||
await serverDB.insert(agentEvalRunTopics).values([
|
||||
{ userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'running' },
|
||||
{
|
||||
userId,
|
||||
runId: otherRun.id,
|
||||
topicId: otherTopic.id,
|
||||
testCaseId: testCaseId1,
|
||||
status: 'running',
|
||||
},
|
||||
]);
|
||||
|
||||
// Backdate both
|
||||
await serverDB
|
||||
.update(agentEvalRunTopics)
|
||||
.set({ createdAt: sql`NOW() - interval '30 minutes'` });
|
||||
|
||||
const rows = await runTopicModel.batchMarkTimeout(runId, 1_200_000);
|
||||
|
||||
expect(rows).toHaveLength(1);
|
||||
|
||||
// Other run's topic should still be running
|
||||
const [otherRow] = await serverDB.query.agentEvalRunTopics.findMany({
|
||||
where: eq(agentEvalRunTopics.topicId, otherTopic.id),
|
||||
});
|
||||
expect(otherRow.status).toBe('running');
|
||||
});
|
||||
});
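The `batchMarkTimeout` tests above pin down one selection rule: a row is marked only when its status is exactly `'running'`, its age exceeds the timeout, and it belongs to the given run. The real model applies this in a single SQL `UPDATE`; the sketch below extracts the same rule as a pure function so it can be checked in isolation. The names (`RunTopicRow`, `selectTimedOut`) are illustrative, not the model's actual API.

```typescript
type RunTopicStatus = 'running' | 'pending' | 'passed' | 'failed' | 'error' | 'timeout' | null;

interface RunTopicRow {
  topicId: string;
  status: RunTopicStatus;
  createdAt: Date;
}

/**
 * Rows eligible for a timeout mark: status is exactly 'running' (not null,
 * 'pending', or any terminal state) and the row is older than timeoutMs.
 */
function selectTimedOut(rows: RunTopicRow[], timeoutMs: number, now = new Date()): RunTopicRow[] {
  return rows.filter(
    (row) => row.status === 'running' && now.getTime() - row.createdAt.getTime() > timeoutMs,
  );
}
```

Run-scoping is not modelled here because the caller passes only that run's rows; in SQL it is the `runId` condition in the `WHERE` clause.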

  describe('deleteErrorRunTopics', () => {
    it('should delete only error and timeout RunTopics', async () => {
      await serverDB.insert(agentEvalRunTopics).values([
        { userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'passed' },
        { userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'error' },
      ]);

      const deleted = await runTopicModel.deleteErrorRunTopics(runId);

      expect(deleted).toHaveLength(1);
      expect(deleted[0].topicId).toBe(topicId2);

      const remaining = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.runId, runId),
      });
      expect(remaining).toHaveLength(1);
      expect(remaining[0].status).toBe('passed');
    });

    it('should delete both error and timeout statuses', async () => {
      const [topic3] = await serverDB
        .insert(topics)
        .values({ userId, title: 'Topic 3', trigger: 'eval', mode: 'test' })
        .returning();
      const [testCase3] = await serverDB
        .insert(agentEvalTestCases)
        .values({ userId, datasetId, content: { input: 'Q3' }, sortOrder: 3 })
        .returning();

      await serverDB.insert(agentEvalRunTopics).values([
        { userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'error' },
        { userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'timeout' },
        { userId, runId, topicId: topic3.id, testCaseId: testCase3.id, status: 'failed' },
      ]);

      const deleted = await runTopicModel.deleteErrorRunTopics(runId);

      expect(deleted).toHaveLength(2);

      const remaining = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.runId, runId),
      });
      expect(remaining).toHaveLength(1);
      expect(remaining[0].status).toBe('failed');
    });

    it('should return empty array when no error/timeout topics exist', async () => {
      await serverDB.insert(agentEvalRunTopics).values([
        { userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'passed' },
        { userId, runId, topicId: topicId2, testCaseId: testCaseId2, status: 'failed' },
      ]);

      const deleted = await runTopicModel.deleteErrorRunTopics(runId);

      expect(deleted).toHaveLength(0);
    });

    it('should not affect other runs', async () => {
      const [otherRun] = await serverDB
        .insert(agentEvalRuns)
        .values({ datasetId, userId, status: 'completed' })
        .returning();
      const [otherTopic] = await serverDB
        .insert(topics)
        .values({ userId, title: 'Other', trigger: 'eval' })
        .returning();

      await serverDB.insert(agentEvalRunTopics).values([
        { userId, runId, topicId: topicId1, testCaseId: testCaseId1, status: 'error' },
        {
          userId,
          runId: otherRun.id,
          topicId: otherTopic.id,
          testCaseId: testCaseId1,
          status: 'error',
        },
      ]);

      await runTopicModel.deleteErrorRunTopics(runId);

      // Other run's error topic should still exist
      const otherRunTopics = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.runId, otherRun.id),
      });
      expect(otherRunTopics).toHaveLength(1);
      expect(otherRunTopics[0].status).toBe('error');
    });
  });

  describe('cascade deletion', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalRunTopics).values({
        userId,
        runId,
        topicId: topicId1,
        testCaseId: testCaseId1,
      });
    });

    it('should cascade delete when run is deleted', async () => {
      await serverDB.delete(agentEvalRuns).where(eq(agentEvalRuns.id, runId));

      const remaining = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.runId, runId),
      });

      expect(remaining).toHaveLength(0);
    });

    it('should cascade delete when topic is deleted', async () => {
      await serverDB.delete(topics).where(eq(topics.id, topicId1));

      const remaining = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.topicId, topicId1),
      });

      expect(remaining).toHaveLength(0);
    });

    it('should cascade delete when test case is deleted', async () => {
      await serverDB.delete(agentEvalTestCases).where(eq(agentEvalTestCases.id, testCaseId1));

      const remaining = await serverDB.query.agentEvalRunTopics.findMany({
        where: eq(agentEvalRunTopics.testCaseId, testCaseId1),
      });

      expect(remaining).toHaveLength(0);
    });
  });
});
@@ -0,0 +1,535 @@
import { eq } from 'drizzle-orm';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';

import { getTestDB } from '../../../core/getTestDB';
import {
  agentEvalBenchmarks,
  agentEvalDatasets,
  agentEvalTestCases,
  users,
} from '../../../schemas';
import { AgentEvalTestCaseModel } from '../testCase';

const serverDB = await getTestDB();

const userId = 'testcase-test-user';
const testCaseModel = new AgentEvalTestCaseModel(serverDB, userId);

let datasetId: string;

beforeEach(async () => {
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);

  // Create test user
  await serverDB.insert(users).values({ id: userId });

  // Create a test benchmark
  const [benchmark] = await serverDB
    .insert(agentEvalBenchmarks)
    .values({
      identifier: 'test-benchmark',
      name: 'Test Benchmark',
      rubrics: [],
      isSystem: false,
    })
    .returning();

  // Create a test dataset
  const [dataset] = await serverDB
    .insert(agentEvalDatasets)
    .values({
      benchmarkId: benchmark.id,
      identifier: 'test-dataset',
      name: 'Test Dataset',
      userId,
    })
    .returning();
  datasetId = dataset.id;
});

afterEach(async () => {
  await serverDB.delete(agentEvalTestCases);
  await serverDB.delete(agentEvalDatasets);
  await serverDB.delete(agentEvalBenchmarks);
  await serverDB.delete(users);
});

describe('AgentEvalTestCaseModel', () => {
  describe('create', () => {
    it('should create a new test case', async () => {
      const params = {
        datasetId,
        content: {
          input: 'What is AI?',
          expected: 'Artificial Intelligence...',
          context: { difficulty: 'easy' },
        },
        metadata: { source: 'manual' },
        sortOrder: 1,
      };

      const result = await testCaseModel.create(params);

      expect(result).toBeDefined();
      expect(result.datasetId).toBe(datasetId);
      expect(result.content).toEqual({
        input: 'What is AI?',
        expected: 'Artificial Intelligence...',
        context: { difficulty: 'easy' },
      });
      expect(result.metadata).toEqual({ source: 'manual' });
      expect(result.sortOrder).toBe(1);
      expect(result.createdAt).toBeDefined();
      expect(result.updatedAt).toBeDefined();
    });

    it('should create a test case with minimal parameters', async () => {
      const params = {
        datasetId,
        content: {
          input: 'Minimal test',
        },
      };

      const result = await testCaseModel.create(params);

      expect(result).toBeDefined();
      expect(result.content.input).toBe('Minimal test');
      expect(result.content.expected).toBeUndefined();
    });

    it('should auto-assign sortOrder starting from 1 when not provided', async () => {
      const r1 = await testCaseModel.create({ datasetId, content: { input: 'Q1' } });
      const r2 = await testCaseModel.create({ datasetId, content: { input: 'Q2' } });
      const r3 = await testCaseModel.create({ datasetId, content: { input: 'Q3' } });

      expect(r1.sortOrder).toBe(1);
      expect(r2.sortOrder).toBe(2);
      expect(r3.sortOrder).toBe(3);
    });

    it('should continue sortOrder from existing max when auto-assigning', async () => {
      await testCaseModel.create({ datasetId, content: { input: 'Q1' }, sortOrder: 5 });

      const r2 = await testCaseModel.create({ datasetId, content: { input: 'Q2' } });

      expect(r2.sortOrder).toBe(6);
    });

    it('should continue sortOrder after gaps (e.g. 1, 3, 10 → next is 11)', async () => {
      await testCaseModel.create({ datasetId, content: { input: 'Q1' }, sortOrder: 1 });
      await testCaseModel.create({ datasetId, content: { input: 'Q2' }, sortOrder: 3 });
      await testCaseModel.create({ datasetId, content: { input: 'Q3' }, sortOrder: 10 });

      const r4 = await testCaseModel.create({ datasetId, content: { input: 'Q4' } });

      expect(r4.sortOrder).toBe(11);
    });

    it('should continue sortOrder after middle items deleted', async () => {
      const r1 = await testCaseModel.create({ datasetId, content: { input: 'Q1' } });
      const r2 = await testCaseModel.create({ datasetId, content: { input: 'Q2' } });
      await testCaseModel.create({ datasetId, content: { input: 'Q3' } });

      // Delete middle item
      await testCaseModel.delete(r2.id);

      // New item should still be max+1 = 4, not fill the gap
      const r4 = await testCaseModel.create({ datasetId, content: { input: 'Q4' } });
      expect(r4.sortOrder).toBe(4);
    });

    it('should mix explicit and auto sortOrder correctly', async () => {
      const r1 = await testCaseModel.create({ datasetId, content: { input: 'Q1' }, sortOrder: 3 });
      const r2 = await testCaseModel.create({ datasetId, content: { input: 'Q2' } }); // auto: 4
      const r3 = await testCaseModel.create({
        datasetId,
        content: { input: 'Q3' },
        sortOrder: 100,
      });
      const r4 = await testCaseModel.create({ datasetId, content: { input: 'Q4' } }); // auto: 101

      expect(r1.sortOrder).toBe(3);
      expect(r2.sortOrder).toBe(4);
      expect(r3.sortOrder).toBe(100);
      expect(r4.sortOrder).toBe(101);
    });
  });
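The `create` tests above all follow one auto-assignment rule: when no `sortOrder` is supplied, the model appears to take `max(existing) + 1`, starting at 1 for an empty dataset and never refilling gaps left by deletions. A minimal sketch of that rule as a pure function; `nextSortOrder` is a hypothetical helper name, not the model's actual implementation (which presumably computes the max in SQL).

```typescript
/**
 * Next auto-assigned sortOrder for a dataset, given the sortOrder values of
 * its existing test cases. Empty dataset starts at 1; gaps are not refilled.
 */
function nextSortOrder(existingOrders: number[]): number {
  if (existingOrders.length === 0) return 1;
  return Math.max(...existingOrders) + 1;
}
```

This explains the "after middle items deleted" case: deleting sortOrder 2 from {1, 2, 3} leaves a max of 3, so the next create still gets 4.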

  describe('batchCreate', () => {
    it('should create multiple test cases', async () => {
      const cases = [
        {
          datasetId,
          content: { input: 'Test 1' },
          sortOrder: 1,
        },
        {
          datasetId,
          content: { input: 'Test 2', expected: 'Answer 2' },
          sortOrder: 2,
        },
        {
          datasetId,
          content: { input: 'Test 3' },
          metadata: { reviewed: true },
          sortOrder: 3,
        },
      ];

      const results = await testCaseModel.batchCreate(cases);

      expect(results).toHaveLength(3);
      expect(results[0].content.input).toBe('Test 1');
      expect(results[1].content.expected).toBe('Answer 2');
      expect(results[2].metadata).toEqual({ reviewed: true });
    });

    it('should auto-inject userId from model', async () => {
      const results = await testCaseModel.batchCreate([
        { datasetId, content: { input: 'Q1' }, sortOrder: 1 },
      ]);

      expect(results[0].userId).toBe(userId);
    });

    it('should handle second batch import after first batch (simulating CSV import)', async () => {
      // First import: 3 items
      const batch1 = await testCaseModel.batchCreate([
        { datasetId, content: { input: 'Q1' }, sortOrder: 1 },
        { datasetId, content: { input: 'Q2' }, sortOrder: 2 },
        { datasetId, content: { input: 'Q3' }, sortOrder: 3 },
      ]);
      expect(batch1).toHaveLength(3);

      // Simulate how the router computes sortOrder for the second import:
      // existingCount=3, so the new items get 3+0+1=4 and 3+1+1=5
      const existingCount = await testCaseModel.countByDatasetId(datasetId);
      expect(existingCount).toBe(3);

      const batch2 = await testCaseModel.batchCreate([
        { datasetId, content: { input: 'Q4' }, sortOrder: existingCount + 1 },
        { datasetId, content: { input: 'Q5' }, sortOrder: existingCount + 2 },
      ]);

      expect(batch2[0].sortOrder).toBe(4);
      expect(batch2[1].sortOrder).toBe(5);

      // Verify total order via findByDatasetId
      const all = await testCaseModel.findByDatasetId(datasetId);
      expect(all).toHaveLength(5);
      expect(all.map((r) => r.sortOrder)).toEqual([1, 2, 3, 4, 5]);
      expect(all.map((r) => r.content.input)).toEqual(['Q1', 'Q2', 'Q3', 'Q4', 'Q5']);
    });

    it('should handle batch import after single creates', async () => {
      // Create via single create (auto sortOrder)
      await testCaseModel.create({ datasetId, content: { input: 'Q1' } }); // sortOrder=1
      await testCaseModel.create({ datasetId, content: { input: 'Q2' } }); // sortOrder=2

      // Now simulate CSV import
      const existingCount = await testCaseModel.countByDatasetId(datasetId);
      expect(existingCount).toBe(2);

      const batch = await testCaseModel.batchCreate([
        { datasetId, content: { input: 'Q3' }, sortOrder: existingCount + 1 },
        { datasetId, content: { input: 'Q4' }, sortOrder: existingCount + 2 },
        { datasetId, content: { input: 'Q5' }, sortOrder: existingCount + 3 },
      ]);

      const all = await testCaseModel.findByDatasetId(datasetId);
      expect(all).toHaveLength(5);
      expect(all.map((r) => r.sortOrder)).toEqual([1, 2, 3, 4, 5]);
    });

    it('should handle batch import after deleting some items', async () => {
      // Create 5 items
      const batch1 = await testCaseModel.batchCreate([
        { datasetId, content: { input: 'Q1' }, sortOrder: 1 },
        { datasetId, content: { input: 'Q2' }, sortOrder: 2 },
        { datasetId, content: { input: 'Q3' }, sortOrder: 3 },
        { datasetId, content: { input: 'Q4' }, sortOrder: 4 },
        { datasetId, content: { input: 'Q5' }, sortOrder: 5 },
      ]);

      // Delete Q2 and Q4 — remaining: Q1(1), Q3(3), Q5(5)
      await testCaseModel.delete(batch1[1].id);
      await testCaseModel.delete(batch1[3].id);

      // Import new items — existingCount=3, so sortOrder starts at 4
      const existingCount = await testCaseModel.countByDatasetId(datasetId);
      expect(existingCount).toBe(3);

      const batch2 = await testCaseModel.batchCreate([
        { datasetId, content: { input: 'Q6' }, sortOrder: existingCount + 1 },
        { datasetId, content: { input: 'Q7' }, sortOrder: existingCount + 2 },
      ]);

      expect(batch2[0].sortOrder).toBe(4);
      expect(batch2[1].sortOrder).toBe(5);

      // Verify total count and that new items are retrievable
      const all = await testCaseModel.findByDatasetId(datasetId);
      expect(all).toHaveLength(5);
      // Sorted by sortOrder: Q1(1), Q3(3), Q6(4), then Q5(5) & Q7(5) share the same sortOrder
      expect(all[0].content.input).toBe('Q1');
      expect(all[0].sortOrder).toBe(1);
      expect(all[1].content.input).toBe('Q3');
      expect(all[1].sortOrder).toBe(3);
      expect(all[2].content.input).toBe('Q6');
      expect(all[2].sortOrder).toBe(4);
      // Q5 and Q7 both have sortOrder=5
      expect(all[3].sortOrder).toBe(5);
      expect(all[4].sortOrder).toBe(5);
      expect(new Set([all[3].content.input, all[4].content.input])).toEqual(new Set(['Q5', 'Q7']));
    });
  });
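The CSV-import tests above simulate how the router assigns `sortOrder` to an imported batch: `existingCount + index + 1` for each new row. A sketch of that formula, with a hypothetical helper name:

```typescript
/**
 * sortOrder values the router-style CSV import assigns to a new batch:
 * each imported row gets existingCount + index + 1.
 */
function importSortOrders(existingCount: number, batchSize: number): number[] {
  return Array.from({ length: batchSize }, (_, i) => existingCount + i + 1);
}
```

Note that this count-based scheme differs from the max-based rule used by single `create`: as the deletion test above demonstrates, after removing middle items the count can lag behind the max, so imported rows may collide with surviving ones (Q5 and Q7 both end up at sortOrder 5).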

  describe('delete', () => {
    it('should delete a test case', async () => {
      const [testCase] = await serverDB
        .insert(agentEvalTestCases)
        .values({
          userId,
          datasetId,
          content: { input: 'Delete me' },
          sortOrder: 1,
        })
        .returning();

      await testCaseModel.delete(testCase.id);

      const deleted = await serverDB.query.agentEvalTestCases.findFirst({
        where: eq(agentEvalTestCases.id, testCase.id),
      });
      expect(deleted).toBeUndefined();
    });

    it('should return 0 rowCount when test case not found', async () => {
      await testCaseModel.delete('non-existent-id');
      // No rowCount in PGlite
    });
  });

  describe('findById', () => {
    it('should find a test case by id', async () => {
      const [testCase] = await serverDB
        .insert(agentEvalTestCases)
        .values({
          userId,
          datasetId,
          content: { input: 'Find me' },
          sortOrder: 1,
        })
        .returning();

      const result = await testCaseModel.findById(testCase.id);

      expect(result).toBeDefined();
      expect(result?.id).toBe(testCase.id);
      expect(result?.content.input).toBe('Find me');
    });

    it('should return undefined when test case not found', async () => {
      const result = await testCaseModel.findById('non-existent-id');
      expect(result).toBeUndefined();
    });
  });

  describe('findByDatasetId', () => {
    beforeEach(async () => {
      await serverDB.insert(agentEvalTestCases).values([
        {
          userId,
          datasetId,
          content: { input: 'Test 1' },
          sortOrder: 3,
        },
        {
          userId,
          datasetId,
          content: { input: 'Test 2' },
          sortOrder: 1,
        },
        {
          userId,
          datasetId,
          content: { input: 'Test 3' },
          sortOrder: 2,
        },
      ]);
    });

    it('should find all test cases by dataset id', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId);

      expect(results).toHaveLength(3);
    });

    it('should order by sortOrder', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId);

      expect(results[0].sortOrder).toBe(1);
      expect(results[1].sortOrder).toBe(2);
      expect(results[2].sortOrder).toBe(3);
    });

    it('should support limit parameter', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId, 2);

      expect(results).toHaveLength(2);
      expect(results[0].sortOrder).toBe(1);
      expect(results[1].sortOrder).toBe(2);
    });

    it('should support offset parameter', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId, undefined, 1);

      expect(results).toHaveLength(2);
      expect(results[0].sortOrder).toBe(2);
      expect(results[1].sortOrder).toBe(3);
    });

    it('should support both limit and offset', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId, 1, 1);

      expect(results).toHaveLength(1);
      expect(results[0].sortOrder).toBe(2);
    });

    it('should return empty array when dataset has no test cases', async () => {
      const results = await testCaseModel.findByDatasetId('non-existent-dataset');

      expect(results).toHaveLength(0);
    });

    it('should handle limit = 0', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId, 0);

      expect(results).toHaveLength(0);
    });

    it('should handle offset beyond available records', async () => {
      const results = await testCaseModel.findByDatasetId(datasetId, undefined, 10);

      expect(results).toHaveLength(0);
    });
  });

  describe('countByDatasetId', () => {
    it('should count test cases by dataset id', async () => {
      await serverDB.insert(agentEvalTestCases).values([
        { userId, datasetId, content: { input: 'Test 1' }, sortOrder: 1 },
        { userId, datasetId, content: { input: 'Test 2' }, sortOrder: 2 },
        { userId, datasetId, content: { input: 'Test 3' }, sortOrder: 3 },
      ]);

      const count = await testCaseModel.countByDatasetId(datasetId);

      expect(count).toBe(3);
    });

    it('should return 0 when dataset has no test cases', async () => {
      const count = await testCaseModel.countByDatasetId('non-existent-dataset');

      expect(count).toBe(0);
    });

    it('should return correct count after adding more test cases', async () => {
      await serverDB
        .insert(agentEvalTestCases)
        .values([{ userId, datasetId, content: { input: 'Test 1' }, sortOrder: 1 }]);

      let count = await testCaseModel.countByDatasetId(datasetId);
      expect(count).toBe(1);

      await serverDB
        .insert(agentEvalTestCases)
        .values([{ userId, datasetId, content: { input: 'Test 2' }, sortOrder: 2 }]);

      count = await testCaseModel.countByDatasetId(datasetId);
      expect(count).toBe(2);
    });
  });

  describe('update', () => {
    it('should update a test case', async () => {
      const [testCase] = await serverDB
        .insert(agentEvalTestCases)
        .values({
          userId,
          datasetId,
          content: { input: 'Original' },
          sortOrder: 1,
        })
        .returning();

      const result = await testCaseModel.update(testCase.id, {
        content: { input: 'Updated', expected: 'New answer' },
        metadata: { reviewed: true },
      });

      expect(result).toBeDefined();
      expect(result?.content.input).toBe('Updated');
      expect(result?.content.expected).toBe('New answer');
      expect(result?.metadata).toEqual({ reviewed: true });
      expect(result?.updatedAt).toBeDefined();
      expect(result?.updatedAt.getTime()).toBeGreaterThanOrEqual(result!.createdAt.getTime());
    });

    it('should update only sortOrder', async () => {
      const [testCase] = await serverDB
        .insert(agentEvalTestCases)
        .values({
          userId,
          datasetId,
          content: { input: 'Test' },
          sortOrder: 1,
        })
        .returning();

      const result = await testCaseModel.update(testCase.id, {
        sortOrder: 5,
      });

      expect(result?.sortOrder).toBe(5);
      expect(result?.content.input).toBe('Test');
    });

    it('should return undefined when test case not found', async () => {
      const result = await testCaseModel.update('non-existent-id', {
        content: { input: 'New' },
      });

      expect(result).toBeUndefined();
    });

    it('should update content partially', async () => {
      const [testCase] = await serverDB
        .insert(agentEvalTestCases)
        .values({
          userId,
          datasetId,
          content: {
            input: 'Original Input',
            expected: 'Original Expected',
          },
          sortOrder: 1,
        })
        .returning();

      const result = await testCaseModel.update(testCase.id, {
        content: {
          input: 'Original Input',
          expected: 'Updated Expected',
        },
      });

      expect(result?.content.expected).toBe('Updated Expected');
      expect(result?.content.input).toBe('Original Input');
    });
  });
});

160  packages/database/src/models/agentEval/benchmark.ts  Normal file

@@ -0,0 +1,160 @@
import { and, count, desc, eq, getTableColumns, sql } from 'drizzle-orm';

import {
  agentEvalBenchmarks,
  agentEvalDatasets,
  agentEvalRuns,
  agentEvalTestCases,
  type NewAgentEvalBenchmark,
} from '../../schemas';
import { type LobeChatDatabase } from '../../type';

export class AgentEvalBenchmarkModel {
  private userId: string;
  private db: LobeChatDatabase;

  constructor(db: LobeChatDatabase, userId: string) {
    this.db = db;
    this.userId = userId;
  }

  /**
   * Create a new benchmark
   */
  create = async (params: NewAgentEvalBenchmark) => {
    const [result] = await this.db.insert(agentEvalBenchmarks).values(params).returning();
    return result;
  };

  /**
   * Delete a benchmark by id (only user-created benchmarks)
   */
  delete = async (id: string) => {
    return this.db
      .delete(agentEvalBenchmarks)
      .where(and(eq(agentEvalBenchmarks.id, id), eq(agentEvalBenchmarks.isSystem, false)));
  };

  /**
   * Query benchmarks (system + user-created)
   * @param includeSystem - Whether to include system benchmarks (default: true)
   */
  query = async (includeSystem = true) => {
    const conditions = includeSystem ? undefined : eq(agentEvalBenchmarks.isSystem, false);

    const datasetCountSq = this.db
      .select({
        benchmarkId: agentEvalDatasets.benchmarkId,
        count: count().as('dataset_count'),
      })
      .from(agentEvalDatasets)
      .groupBy(agentEvalDatasets.benchmarkId)
      .as('dc');

    const testCaseCountSq = this.db
      .select({
        benchmarkId: agentEvalDatasets.benchmarkId,
        count: count().as('test_case_count'),
      })
      .from(agentEvalTestCases)
      .innerJoin(agentEvalDatasets, eq(agentEvalTestCases.datasetId, agentEvalDatasets.id))
      .groupBy(agentEvalDatasets.benchmarkId)
      .as('tc');

    const runCountSq = this.db
      .select({
        benchmarkId: agentEvalDatasets.benchmarkId,
        count: count().as('run_count'),
      })
      .from(agentEvalRuns)
      .innerJoin(agentEvalDatasets, eq(agentEvalRuns.datasetId, agentEvalDatasets.id))
      .where(eq(agentEvalRuns.userId, this.userId))
      .groupBy(agentEvalDatasets.benchmarkId)
      .as('rc');

    const rows = await this.db
      .select({
        ...getTableColumns(agentEvalBenchmarks),
|
||||
datasetCount: sql<number>`COALESCE(${datasetCountSq.count}, 0)`.as('datasetCount'),
|
||||
testCaseCount: sql<number>`COALESCE(${testCaseCountSq.count}, 0)`.as('testCaseCount'),
|
||||
runCount: sql<number>`COALESCE(${runCountSq.count}, 0)`.as('runCount'),
|
||||
})
|
||||
.from(agentEvalBenchmarks)
|
||||
.leftJoin(datasetCountSq, eq(agentEvalBenchmarks.id, datasetCountSq.benchmarkId))
|
||||
.leftJoin(testCaseCountSq, eq(agentEvalBenchmarks.id, testCaseCountSq.benchmarkId))
|
||||
.leftJoin(runCountSq, eq(agentEvalBenchmarks.id, runCountSq.benchmarkId))
|
||||
.where(conditions)
|
||||
.orderBy(desc(agentEvalBenchmarks.createdAt));
|
||||
|
||||
// Fetch recent runs for each benchmark
|
||||
const benchmarksWithRuns = await Promise.all(
|
||||
rows.map(async (row) => {
|
||||
const recentRuns = await this.db
|
||||
.select()
|
||||
.from(agentEvalRuns)
|
||||
.innerJoin(agentEvalDatasets, eq(agentEvalRuns.datasetId, agentEvalDatasets.id))
|
||||
.where(
|
||||
and(eq(agentEvalDatasets.benchmarkId, row.id), eq(agentEvalRuns.userId, this.userId)),
|
||||
)
|
||||
.orderBy(desc(agentEvalRuns.createdAt))
|
||||
.limit(5);
|
||||
|
||||
return {
|
||||
id: row.id,
|
||||
identifier: row.identifier,
|
||||
name: row.name,
|
||||
description: row.description,
|
||||
rubrics: row.rubrics,
|
||||
referenceUrl: row.referenceUrl,
|
||||
metadata: row.metadata,
|
||||
tags: (row as any).tags,
|
||||
isSystem: row.isSystem,
|
||||
createdAt: row.createdAt,
|
||||
updatedAt: row.updatedAt,
|
||||
datasetCount: Number(row.datasetCount),
|
||||
runCount: Number(row.runCount),
|
||||
testCaseCount: Number(row.testCaseCount),
|
||||
recentRuns: recentRuns.map((r) => r.agent_eval_runs),
|
||||
};
|
||||
}),
|
||||
);
|
||||
|
||||
return benchmarksWithRuns;
|
||||
};
|
||||
|
||||
/**
|
||||
* Find benchmark by id
|
||||
*/
|
||||
findById = async (id: string) => {
|
||||
const [result] = await this.db
|
||||
.select()
|
||||
.from(agentEvalBenchmarks)
|
||||
.where(eq(agentEvalBenchmarks.id, id))
|
||||
.limit(1);
|
||||
return result;
|
||||
};
|
||||
|
||||
/**
|
||||
* Find benchmark by identifier
|
||||
*/
|
||||
findByIdentifier = async (identifier: string) => {
|
||||
const [result] = await this.db
|
||||
.select()
|
||||
.from(agentEvalBenchmarks)
|
||||
.where(eq(agentEvalBenchmarks.identifier, identifier))
|
||||
.limit(1);
|
||||
return result;
|
||||
};
|
||||
|
||||
/**
|
||||
* Update benchmark (only user-created benchmarks)
|
||||
*/
|
||||
update = async (id: string, value: Partial<NewAgentEvalBenchmark>) => {
|
||||
const [result] = await this.db
|
||||
.update(agentEvalBenchmarks)
|
||||
.set({ ...value, updatedAt: new Date() })
|
||||
.where(and(eq(agentEvalBenchmarks.id, id), eq(agentEvalBenchmarks.isSystem, false)))
|
||||
.returning();
|
||||
return result;
|
||||
};
|
||||
}
|
||||
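The `query()` method above folds three grouped subqueries into one row per benchmark via left joins, with `COALESCE` turning missing groups into `0` instead of dropping the benchmark. A minimal in-memory sketch of that merge semantics (hypothetical data shapes, not the drizzle API):

```typescript
// Hypothetical rows standing in for the grouped subquery results above.
interface CountRow {
  benchmarkId: string;
  count: number;
}

const mergeCounts = (benchmarkIds: string[], datasetCounts: CountRow[], runCounts: CountRow[]) =>
  benchmarkIds.map((id) => ({
    id,
    // LEFT JOIN + COALESCE: a benchmark with no datasets/runs keeps its row with count 0.
    datasetCount: datasetCounts.find((r) => r.benchmarkId === id)?.count ?? 0,
    runCount: runCounts.find((r) => r.benchmarkId === id)?.count ?? 0,
  }));
```

Doing the counts as pre-grouped subqueries (rather than joining raw rows) avoids the classic fan-out problem where joining two one-to-many tables multiplies the counts.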
packages/database/src/models/agentEval/dataset.ts (new file, 105 lines)
@@ -0,0 +1,105 @@
import { and, asc, count, desc, eq, isNull, or } from 'drizzle-orm';

import { agentEvalDatasets, agentEvalTestCases, type NewAgentEvalDataset } from '../../schemas';
import { type LobeChatDatabase } from '../../type';

export class AgentEvalDatasetModel {
  private userId: string;
  private db: LobeChatDatabase;

  constructor(db: LobeChatDatabase, userId: string) {
    this.db = db;
    this.userId = userId;
  }

  /**
   * Create a new dataset
   */
  create = async (params: NewAgentEvalDataset) => {
    const [result] = await this.db
      .insert(agentEvalDatasets)
      .values({ ...params, userId: this.userId })
      .returning();
    return result;
  };

  /**
   * Delete a dataset by id
   */
  delete = async (id: string) => {
    return this.db
      .delete(agentEvalDatasets)
      .where(and(eq(agentEvalDatasets.id, id), eq(agentEvalDatasets.userId, this.userId)));
  };

  /**
   * Query datasets (system + user-owned) with test case counts
   * @param benchmarkId - Optional benchmark filter
   */
  query = async (benchmarkId?: string) => {
    const conditions = [
      or(eq(agentEvalDatasets.userId, this.userId), isNull(agentEvalDatasets.userId)),
    ];

    if (benchmarkId) {
      conditions.push(eq(agentEvalDatasets.benchmarkId, benchmarkId));
    }

    return this.db
      .select({
        benchmarkId: agentEvalDatasets.benchmarkId,
        createdAt: agentEvalDatasets.createdAt,
        description: agentEvalDatasets.description,
        id: agentEvalDatasets.id,
        identifier: agentEvalDatasets.identifier,
        metadata: agentEvalDatasets.metadata,
        name: agentEvalDatasets.name,
        testCaseCount: count(agentEvalTestCases.id).as('testCaseCount'),
        updatedAt: agentEvalDatasets.updatedAt,
        userId: agentEvalDatasets.userId,
      })
      .from(agentEvalDatasets)
      .leftJoin(agentEvalTestCases, eq(agentEvalDatasets.id, agentEvalTestCases.datasetId))
      .where(and(...conditions))
      .groupBy(agentEvalDatasets.id)
      .orderBy(desc(agentEvalDatasets.createdAt));
  };

  /**
   * Find dataset by id (with test cases)
   */
  findById = async (id: string) => {
    const [dataset] = await this.db
      .select()
      .from(agentEvalDatasets)
      .where(
        and(
          eq(agentEvalDatasets.id, id),
          or(eq(agentEvalDatasets.userId, this.userId), isNull(agentEvalDatasets.userId)),
        ),
      )
      .limit(1);

    if (!dataset) return undefined;

    const testCases = await this.db
      .select()
      .from(agentEvalTestCases)
      .where(eq(agentEvalTestCases.datasetId, id))
      .orderBy(asc(agentEvalTestCases.sortOrder));

    return { ...dataset, testCases };
  };

  /**
   * Update dataset
   */
  update = async (id: string, value: Partial<NewAgentEvalDataset>) => {
    const [result] = await this.db
      .update(agentEvalDatasets)
      .set({ ...value, updatedAt: new Date() })
      .where(and(eq(agentEvalDatasets.id, id), eq(agentEvalDatasets.userId, this.userId)))
      .returning();
    return result;
  };
}
packages/database/src/models/agentEval/index.ts (new file, 5 lines)
@@ -0,0 +1,5 @@
export * from './benchmark';
export * from './dataset';
export * from './run';
export * from './runTopic';
export * from './testCase';
packages/database/src/models/agentEval/run.ts (new file, 116 lines)
@@ -0,0 +1,116 @@
import { and, count, desc, eq, inArray } from 'drizzle-orm';

import { agentEvalDatasets, agentEvalRuns, type NewAgentEvalRun } from '../../schemas';
import { type LobeChatDatabase } from '../../type';

export class AgentEvalRunModel {
  private userId: string;
  private db: LobeChatDatabase;

  constructor(db: LobeChatDatabase, userId: string) {
    this.db = db;
    this.userId = userId;
  }

  /**
   * Create a new run
   */
  create = async (params: Omit<NewAgentEvalRun, 'userId'>) => {
    const [result] = await this.db
      .insert(agentEvalRuns)
      .values({ ...params, userId: this.userId })
      .returning();
    return result;
  };

  /**
   * Query runs with optional filters
   */
  query = async (filter?: {
    benchmarkId?: string;
    datasetId?: string;
    limit?: number;
    offset?: number;
    status?: 'idle' | 'pending' | 'running' | 'completed' | 'failed' | 'aborted';
  }) => {
    const conditions = [eq(agentEvalRuns.userId, this.userId)];

    if (filter?.datasetId) {
      conditions.push(eq(agentEvalRuns.datasetId, filter.datasetId));
    }

    if (filter?.benchmarkId) {
      const datasetIds = this.db
        .select({ id: agentEvalDatasets.id })
        .from(agentEvalDatasets)
        .where(eq(agentEvalDatasets.benchmarkId, filter.benchmarkId));

      conditions.push(inArray(agentEvalRuns.datasetId, datasetIds));
    }

    if (filter?.status) {
      conditions.push(eq(agentEvalRuns.status, filter.status));
    }

    const query = this.db
      .select()
      .from(agentEvalRuns)
      .where(and(...conditions))
      .orderBy(desc(agentEvalRuns.createdAt))
      .$dynamic();

    if (filter?.limit !== undefined) {
      query.limit(filter.limit);
    }

    if (filter?.offset !== undefined) {
      query.offset(filter.offset);
    }

    return query;
  };

  /**
   * Find run by id
   */
  findById = async (id: string) => {
    const [result] = await this.db
      .select()
      .from(agentEvalRuns)
      .where(and(eq(agentEvalRuns.id, id), eq(agentEvalRuns.userId, this.userId)))
      .limit(1);
    return result;
  };

  /**
   * Update run
   */
  update = async (id: string, value: Partial<NewAgentEvalRun>) => {
    const [result] = await this.db
      .update(agentEvalRuns)
      .set({ ...value, updatedAt: new Date() })
      .where(and(eq(agentEvalRuns.id, id), eq(agentEvalRuns.userId, this.userId)))
      .returning();
    return result;
  };

  /**
   * Delete run (only user-created runs)
   */
  delete = async (id: string) => {
    return this.db
      .delete(agentEvalRuns)
      .where(and(eq(agentEvalRuns.id, id), eq(agentEvalRuns.userId, this.userId)));
  };

  /**
   * Count runs by dataset id
   */
  countByDatasetId = async (datasetId: string) => {
    const result = await this.db
      .select({ value: count() })
      .from(agentEvalRuns)
      .where(and(eq(agentEvalRuns.datasetId, datasetId), eq(agentEvalRuns.userId, this.userId)));
    return Number(result[0]?.value) || 0;
  };
}
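The `query()` method above builds its WHERE clause from an optional filter and only applies `limit`/`offset` when they are provided (via drizzle's `$dynamic()`). A pure in-memory analogue of that filter-then-paginate flow, with hypothetical row shapes rather than the real schema:

```typescript
// Hypothetical in-memory analogue of AgentEvalRunModel.query() above.
interface Run {
  datasetId: string;
  status: string;
  createdAt: number;
}

const queryRuns = (
  runs: Run[],
  filter?: { datasetId?: string; status?: string; limit?: number; offset?: number },
): Run[] => {
  let result = runs
    .filter((r) => (filter?.datasetId ? r.datasetId === filter.datasetId : true))
    .filter((r) => (filter?.status ? r.status === filter.status : true))
    .sort((a, b) => b.createdAt - a.createdAt); // newest first, like desc(createdAt)

  // Pagination is only applied when explicitly requested, mirroring the
  // `filter?.limit !== undefined` / `filter?.offset !== undefined` guards.
  if (filter?.offset !== undefined) result = result.slice(filter.offset);
  if (filter?.limit !== undefined) result = result.slice(0, filter.limit);
  return result;
};
```

Checking against `undefined` (rather than truthiness) matters: `limit: 0` and `offset: 0` are legitimate values that a truthiness check would silently drop.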
packages/database/src/models/agentEval/runTopic.ts (new file, 213 lines)
@@ -0,0 +1,213 @@
import { and, asc, desc, eq, lt, or } from 'drizzle-orm';

import {
  agentEvalRuns,
  type AgentEvalRunTopicItem,
  agentEvalRunTopics,
  agentEvalTestCases,
  type NewAgentEvalRunTopic,
  topics,
} from '../../schemas';
import { type LobeChatDatabase } from '../../type';

export class AgentEvalRunTopicModel {
  private userId: string;
  private db: LobeChatDatabase;

  constructor(db: LobeChatDatabase, userId: string) {
    this.db = db;
    this.userId = userId;
  }

  /**
   * Batch create run-topic associations
   */
  batchCreate = async (items: Omit<NewAgentEvalRunTopic, 'userId'>[]) => {
    if (items.length === 0) return [];
    const withUserId = items.map((item) => ({ ...item, userId: this.userId }));
    return this.db.insert(agentEvalRunTopics).values(withUserId).returning();
  };

  /**
   * Find all topics for a run (with TestCase and Topic details)
   */
  findByRunId = async (runId: string) => {
    const rows = await this.db
      .select({
        createdAt: agentEvalRunTopics.createdAt,
        evalResult: agentEvalRunTopics.evalResult,
        passed: agentEvalRunTopics.passed,
        runId: agentEvalRunTopics.runId,
        score: agentEvalRunTopics.score,
        status: agentEvalRunTopics.status,
        testCase: agentEvalTestCases,
        testCaseId: agentEvalRunTopics.testCaseId,
        topic: topics,
        topicId: agentEvalRunTopics.topicId,
      })
      .from(agentEvalRunTopics)
      .leftJoin(agentEvalTestCases, eq(agentEvalRunTopics.testCaseId, agentEvalTestCases.id))
      .leftJoin(topics, eq(agentEvalRunTopics.topicId, topics.id))
      .where(and(eq(agentEvalRunTopics.runId, runId), eq(agentEvalRunTopics.userId, this.userId)))
      .orderBy(asc(agentEvalTestCases.sortOrder));

    return rows;
  };

  /**
   * Delete all run-topic associations for a run
   */
  deleteByRunId = async (runId: string) => {
    return this.db
      .delete(agentEvalRunTopics)
      .where(and(eq(agentEvalRunTopics.runId, runId), eq(agentEvalRunTopics.userId, this.userId)));
  };

  /**
   * Find all runs that used a specific test case
   */
  findByTestCaseId = async (testCaseId: string) => {
    const rows = await this.db
      .select({
        createdAt: agentEvalRunTopics.createdAt,
        evalResult: agentEvalRunTopics.evalResult,
        passed: agentEvalRunTopics.passed,
        run: agentEvalRuns,
        runId: agentEvalRunTopics.runId,
        score: agentEvalRunTopics.score,
        testCaseId: agentEvalRunTopics.testCaseId,
        topic: topics,
        topicId: agentEvalRunTopics.topicId,
      })
      .from(agentEvalRunTopics)
      .leftJoin(agentEvalRuns, eq(agentEvalRunTopics.runId, agentEvalRuns.id))
      .leftJoin(topics, eq(agentEvalRunTopics.topicId, topics.id))
      .where(
        and(
          eq(agentEvalRunTopics.testCaseId, testCaseId),
          eq(agentEvalRunTopics.userId, this.userId),
        ),
      )
      .orderBy(desc(agentEvalRunTopics.createdAt));

    return rows;
  };

  /**
   * Find a specific run-topic association by run and test case
   */
  findByRunAndTestCase = async (runId: string, testCaseId: string) => {
    const [row] = await this.db
      .select({
        createdAt: agentEvalRunTopics.createdAt,
        evalResult: agentEvalRunTopics.evalResult,
        passed: agentEvalRunTopics.passed,
        runId: agentEvalRunTopics.runId,
        score: agentEvalRunTopics.score,
        status: agentEvalRunTopics.status,
        testCase: agentEvalTestCases,
        testCaseId: agentEvalRunTopics.testCaseId,
        topic: topics,
        topicId: agentEvalRunTopics.topicId,
      })
      .from(agentEvalRunTopics)
      .leftJoin(agentEvalTestCases, eq(agentEvalRunTopics.testCaseId, agentEvalTestCases.id))
      .leftJoin(topics, eq(agentEvalRunTopics.topicId, topics.id))
      .where(
        and(
          eq(agentEvalRunTopics.runId, runId),
          eq(agentEvalRunTopics.testCaseId, testCaseId),
          eq(agentEvalRunTopics.userId, this.userId),
        ),
      )
      .limit(1);

    return row;
  };

  /**
   * Batch mark aborted RunTopics: any still pending/running row for the run
   * is set to error with an 'Aborted' eval result.
   */
  batchMarkAborted = async (runId: string) => {
    return this.db
      .update(agentEvalRunTopics)
      .set({ status: 'error', evalResult: { error: 'Aborted' } })
      .where(
        and(
          eq(agentEvalRunTopics.userId, this.userId),
          eq(agentEvalRunTopics.runId, runId),
          or(eq(agentEvalRunTopics.status, 'pending'), eq(agentEvalRunTopics.status, 'running')),
        ),
      )
      .returning();
  };

  /**
   * Batch mark timed-out RunTopics.
   * Per-row check: created_at + timeoutMs < NOW().
   * Returns the updated rows so callers can compute per-row duration.
   */
  batchMarkTimeout = async (runId: string, timeoutMs: number) => {
    const deadline = new Date(Date.now() - timeoutMs);
    return this.db
      .update(agentEvalRunTopics)
      .set({ status: 'timeout' })
      .where(
        and(
          eq(agentEvalRunTopics.userId, this.userId),
          eq(agentEvalRunTopics.runId, runId),
          eq(agentEvalRunTopics.status, 'running'),
          lt(agentEvalRunTopics.createdAt, deadline),
        ),
      )
      .returning();
  };

  deleteByRunAndTestCase = async (runId: string, testCaseId: string) => {
    return this.db
      .delete(agentEvalRunTopics)
      .where(
        and(
          eq(agentEvalRunTopics.userId, this.userId),
          eq(agentEvalRunTopics.runId, runId),
          eq(agentEvalRunTopics.testCaseId, testCaseId),
        ),
      )
      .returning();
  };

  /**
   * Delete error/timeout RunTopics for a run, returning deleted rows
   */
  deleteErrorRunTopics = async (runId: string) => {
    return this.db
      .delete(agentEvalRunTopics)
      .where(
        and(
          eq(agentEvalRunTopics.userId, this.userId),
          eq(agentEvalRunTopics.runId, runId),
          or(eq(agentEvalRunTopics.status, 'error'), eq(agentEvalRunTopics.status, 'timeout')),
        ),
      )
      .returning();
  };

  /**
   * Update a RunTopic by composite key (runId + topicId)
   */
  updateByRunAndTopic = async (
    runId: string,
    topicId: string,
    value: Pick<Partial<AgentEvalRunTopicItem>, 'evalResult' | 'passed' | 'score' | 'status'>,
  ) => {
    const [result] = await this.db
      .update(agentEvalRunTopics)
      .set(value)
      .where(
        and(
          eq(agentEvalRunTopics.userId, this.userId),
          eq(agentEvalRunTopics.runId, runId),
          eq(agentEvalRunTopics.topicId, topicId),
        ),
      )
      .returning();
    return result;
  };
}
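`batchMarkTimeout` above rewrites `created_at + timeoutMs < NOW()` as `created_at < NOW() - timeoutMs`, computing one deadline up front and letting the database compare each row against it. A small in-memory sketch of the same per-row check (hypothetical row shape, not the drizzle query):

```typescript
// In-memory analogue of batchMarkTimeout's per-row check:
// createdAt + timeoutMs < now  ⇔  createdAt < now - timeoutMs.
interface RunTopic {
  topicId: string;
  status: string;
  createdAt: number; // epoch ms
}

const markTimedOut = (rows: RunTopic[], timeoutMs: number, now: number): RunTopic[] => {
  const deadline = now - timeoutMs; // computed once, like `new Date(Date.now() - timeoutMs)`
  return rows
    .filter((r) => r.status === 'running' && r.createdAt < deadline)
    .map((r) => ({ ...r, status: 'timeout' }));
};
```

Only `running` rows are eligible, so rows already finished (or marked error/aborted) are never flipped to `timeout` retroactively.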
packages/database/src/models/agentEval/testCase.ts (new file, 115 lines)
@@ -0,0 +1,115 @@
import { and, count, eq, sql } from 'drizzle-orm';

import { agentEvalTestCases, type NewAgentEvalTestCase } from '../../schemas';
import { type LobeChatDatabase } from '../../type';

export class AgentEvalTestCaseModel {
  private userId: string;
  private db: LobeChatDatabase;

  constructor(db: LobeChatDatabase, userId: string) {
    this.db = db;
    this.userId = userId;
  }

  /**
   * Create a single test case
   */
  create = async (params: Omit<NewAgentEvalTestCase, 'userId'>) => {
    let finalParams: NewAgentEvalTestCase = { ...params, userId: this.userId };

    if (finalParams.sortOrder === undefined || finalParams.sortOrder === null) {
      const [maxResult] = await this.db
        .select({ max: sql<number>`COALESCE(MAX(${agentEvalTestCases.sortOrder}), 0)` })
        .from(agentEvalTestCases)
        .where(eq(agentEvalTestCases.datasetId, finalParams.datasetId));

      finalParams = { ...finalParams, sortOrder: maxResult.max + 1 };
    }

    const [result] = await this.db.insert(agentEvalTestCases).values(finalParams).returning();
    return result;
  };

  /**
   * Batch create test cases
   */
  batchCreate = async (cases: Omit<NewAgentEvalTestCase, 'userId'>[]) => {
    const withUserId = cases.map((c) => ({ ...c, userId: this.userId }));
    return this.db.insert(agentEvalTestCases).values(withUserId).returning();
  };

  /**
   * Delete a test case by id
   */
  delete = async (id: string) => {
    return this.db
      .delete(agentEvalTestCases)
      .where(and(eq(agentEvalTestCases.id, id), eq(agentEvalTestCases.userId, this.userId)));
  };

  /**
   * Find test case by id
   */
  findById = async (id: string) => {
    const [result] = await this.db
      .select()
      .from(agentEvalTestCases)
      .where(and(eq(agentEvalTestCases.id, id), eq(agentEvalTestCases.userId, this.userId)))
      .limit(1);
    return result;
  };

  /**
   * Find all test cases by dataset id with pagination
   */
  findByDatasetId = async (datasetId: string, limit?: number, offset?: number) => {
    const query = this.db
      .select()
      .from(agentEvalTestCases)
      .where(
        and(
          eq(agentEvalTestCases.datasetId, datasetId),
          eq(agentEvalTestCases.userId, this.userId),
        ),
      )
      .orderBy(agentEvalTestCases.sortOrder);

    if (limit !== undefined) {
      query.limit(limit);
    }
    if (offset !== undefined) {
      query.offset(offset);
    }

    return query;
  };

  /**
   * Count test cases by dataset id
   */
  countByDatasetId = async (datasetId: string) => {
    const result = await this.db
      .select({ value: count() })
      .from(agentEvalTestCases)
      .where(
        and(
          eq(agentEvalTestCases.datasetId, datasetId),
          eq(agentEvalTestCases.userId, this.userId),
        ),
      );
    return Number(result[0]?.value) || 0;
  };

  /**
   * Update test case
   */
  update = async (id: string, value: Partial<Omit<NewAgentEvalTestCase, 'userId'>>) => {
    const [result] = await this.db
      .update(agentEvalTestCases)
      .set({ ...value, updatedAt: new Date() })
      .where(and(eq(agentEvalTestCases.id, id), eq(agentEvalTestCases.userId, this.userId)))
      .returning();
    return result;
  };
}
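When `create()` above receives no `sortOrder`, it appends the case at the end of the dataset via `COALESCE(MAX(sort_order), 0) + 1` — the `COALESCE` makes the very first case land at `1` rather than `NULL + 1`. The same fallback as a pure function (an illustrative sketch, not the model's actual code path):

```typescript
// Mirrors the SQL COALESCE(MAX(sort_order), 0) + 1 fallback in create() above.
const nextSortOrder = (existing: { sortOrder: number }[]): number =>
  existing.reduce((max, c) => Math.max(max, c.sortOrder), 0) + 1;
```

Note this read-then-insert is not atomic: two concurrent `create()` calls for the same dataset could compute the same MAX and insert duplicate sort orders, which is usually acceptable for a display ordering.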
@@ -43,6 +43,7 @@ import {
 } from 'drizzle-orm';

 import { merge } from '@/utils/merge';
+import { sanitizeNullBytes } from '@/utils/sanitizeNullBytes';
 import { today } from '@/utils/time';

 import {

@@ -201,7 +202,6 @@ export class MessageModel
   // 1. get basic messages with joins, excluding messages that belong to MessageGroups
   const result = await this.db
     .select({
       /* eslint-disable sort-keys-fix/sort-keys-fix*/
       id: messages.id,
       role: messages.role,
       content: messages.content,

@@ -463,8 +463,8 @@ export class MessageModel
       })),

       extra: {
-        model: model,
-        provider: provider,
+        model,
+        provider,
         translate,
         tts: ttsId
           ? {

@@ -540,7 +540,6 @@ export class MessageModel
   // 1. Query messages with joins
   const result = await this.db
     .select({
       /* eslint-disable sort-keys-fix/sort-keys-fix*/
       id: messages.id,
       role: messages.role,
       content: messages.content,

@@ -736,8 +735,8 @@ export class MessageModel
       })),

       extra: {
-        model: model,
-        provider: provider,
+        model,
+        provider,
         translate,
         tts: ttsId
           ? {

@@ -1259,11 +1258,11 @@ export class MessageModel
   if (message.role === 'tool') {
     await trx.insert(messagePlugins).values({
       apiName: plugin?.apiName,
-      arguments: plugin?.arguments,
+      arguments: sanitizeNullBytes(plugin?.arguments),
       id,
       identifier: plugin?.identifier,
       intervention: pluginIntervention,
-      state: pluginState,
+      state: sanitizeNullBytes(pluginState),
       toolCallId: message.tool_call_id,
       type: plugin?.type,
       userId: this.userId,

@@ -1,9 +1,8 @@
 import type { RAGEvalDataSetItem } from '@lobechat/types';
 import { and, desc, eq } from 'drizzle-orm';

-import type { NewEvalDatasetsItem } from '../../../schemas';
-import { evalDatasets } from '../../../schemas';
-import type { LobeChatDatabase } from '../../../type';
+import { NewEvalDatasetsItem, evalDatasets } from '../../schemas';
+import { LobeChatDatabase } from '../../type';

 export class EvalDatasetModel {
   private userId: string;

@@ -1,9 +1,8 @@
 import type { EvalDatasetRecordRefFile } from '@lobechat/types';
 import { and, eq, inArray } from 'drizzle-orm';

-import type { NewEvalDatasetRecordsItem } from '../../../schemas';
-import { evalDatasetRecords, files } from '../../../schemas';
-import type { LobeChatDatabase } from '../../../type';
+import { NewEvalDatasetRecordsItem, evalDatasetRecords, files } from '../../schemas';
+import { LobeChatDatabase } from '../../type';

 export class EvalDatasetRecordModel {
   private userId: string;

@@ -3,9 +3,13 @@ import { EvalEvaluationStatus } from '@lobechat/types';
 import type { SQL } from 'drizzle-orm';
 import { and, count, desc, eq, inArray } from 'drizzle-orm';

-import type { NewEvalEvaluationItem } from '../../../schemas';
-import { evalDatasets, evalEvaluation, evaluationRecords } from '../../../schemas';
-import type { LobeChatDatabase } from '../../../type';
+import {
+  NewEvalEvaluationItem,
+  evalDatasets,
+  evalEvaluation,
+  evaluationRecords,
+} from '../../schemas';
+import { LobeChatDatabase } from '../../type';

 export class EvalEvaluationModel {
   private userId: string;

@@ -1,7 +1,7 @@
 import { and, eq } from 'drizzle-orm';

-import { NewEvaluationRecordsItem, evaluationRecords } from '../../../schemas';
-import { LobeChatDatabase } from '../../../type';
+import { NewEvaluationRecordsItem, evaluationRecords } from '../../schemas';
+import { LobeChatDatabase } from '../../type';

 export class EvaluationRecordModel {
   private userId: string;

@@ -455,6 +455,7 @@
       id: params.id || this.genId(),
       sessionId: params.groupId ? null : params.sessionId,
       title: params.title,
       trigger: params.trigger,
       userId: this.userId,
     })),
   )
packages/eval-dataset-parser/__tests__/detectFormat.test.ts (new file, 33 lines)
@@ -0,0 +1,33 @@
import { describe, expect, it } from 'vitest';

import { detectFormat } from '../src';

describe('detectFormat', () => {
  it('should detect CSV by filename', () => {
    expect(detectFormat('', 'data.csv')).toBe('csv');
  });

  it('should detect XLSX by filename', () => {
    expect(detectFormat('', 'data.xlsx')).toBe('xlsx');
  });

  it('should detect JSON by filename', () => {
    expect(detectFormat('', 'data.json')).toBe('json');
  });

  it('should detect JSONL by filename', () => {
    expect(detectFormat('', 'data.jsonl')).toBe('jsonl');
  });

  it('should detect JSON from content', () => {
    expect(detectFormat('[{"a":1}]')).toBe('json');
  });

  it('should detect JSONL from content', () => {
    expect(detectFormat('{"a":1}\n{"a":2}')).toBe('jsonl');
  });

  it('should default to CSV for unknown content', () => {
    expect(detectFormat('col1,col2\nval1,val2')).toBe('csv');
  });
});
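The tests above pin down a contract for `detectFormat`: an explicit file extension wins, then content sniffing distinguishes JSONL from JSON, and CSV is the fallback. A sketch of a heuristic satisfying exactly that contract (the package's actual implementation may differ):

```typescript
type DatasetFormat = 'csv' | 'xlsx' | 'json' | 'jsonl';

// Hypothetical heuristic matching the behavior the tests above assert.
const detectFormatSketch = (content: string, filename?: string): DatasetFormat => {
  // 1. Extension wins, even for empty content.
  const ext = filename?.split('.').pop()?.toLowerCase();
  if (ext === 'csv' || ext === 'xlsx' || ext === 'json' || ext === 'jsonl') return ext;

  const trimmed = content.trim();
  const lines = trimmed.split('\n').map((l) => l.trim()).filter(Boolean);
  // 2. Several lines that each look like a JSON object → JSONL.
  if (lines.length > 1 && lines.every((l) => l.startsWith('{'))) return 'jsonl';
  // 3. A single JSON value (array or object) → JSON.
  if (trimmed.startsWith('[') || trimmed.startsWith('{')) return 'json';
  // 4. Anything else defaults to CSV.
  return 'csv';
};
```

Ordering matters: the JSONL check must run before the generic JSON check, since every JSONL line also starts like a JSON value.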
@@ -0,0 +1,4 @@
id,prompt,type,answer
1,What is 2+2?,math,4
2,Capital of France?,geography,Paris
3,Who wrote Hamlet?,literature,Shakespeare

@@ -0,0 +1,5 @@
[
  {"input": "What is 2+2?", "expected": "4", "tags": "math"},
  {"input": "Capital of France?", "expected": "Paris", "tags": "geography"},
  {"input": "Who wrote Hamlet?", "expected": "Shakespeare", "tags": "literature"}
]

@@ -0,0 +1,3 @@
{"question":"What is 2+2?","choices":["3","4","5","6"],"answer":1}
{"question":"Capital of France?","choices":["London","Berlin","Paris","Rome"],"answer":2}
{"question":"Who wrote Hamlet?","choices":["Dickens","Shakespeare","Austen","Twain"],"answer":1}
packages/eval-dataset-parser/__tests__/parseDataset.test.ts (new file, 85 lines)
@@ -0,0 +1,85 @@
import { readFileSync } from 'node:fs';
import { resolve } from 'node:path';

import { describe, expect, it } from 'vitest';

import { parseDataset } from '../src';

const fixtures = resolve(__dirname, 'fixtures');

describe('parseDataset - CSV', () => {
  const csv = readFileSync(resolve(fixtures, 'sample.csv'), 'utf-8');

  it('should parse CSV with headers', () => {
    const result = parseDataset(csv, { format: 'csv' });
    expect(result.headers).toEqual(['id', 'prompt', 'type', 'answer']);
    expect(result.totalCount).toBe(3);
    expect(result.rows).toHaveLength(3);
    expect(result.rows[0]).toMatchObject({ id: 1, prompt: 'What is 2+2?', type: 'math', answer: 4 });
  });

  it('should support preview mode', () => {
    const result = parseDataset(csv, { format: 'csv', preview: 2 });
    expect(result.rows).toHaveLength(2);
    expect(result.totalCount).toBe(3);
  });
});

describe('parseDataset - JSONL', () => {
  const jsonl = readFileSync(resolve(fixtures, 'sample.jsonl'), 'utf-8');

  it('should parse JSONL', () => {
    const result = parseDataset(jsonl, { format: 'jsonl' });
    expect(result.headers).toEqual(['question', 'choices', 'answer']);
    expect(result.totalCount).toBe(3);
    expect(result.rows[0]).toMatchObject({
      answer: 1,
      choices: ['3', '4', '5', '6'],
      question: 'What is 2+2?',
    });
  });

  it('should support preview mode', () => {
    const result = parseDataset(jsonl, { format: 'jsonl', preview: 1 });
    expect(result.rows).toHaveLength(1);
    expect(result.totalCount).toBe(3);
  });
});

describe('parseDataset - JSON', () => {
  const json = readFileSync(resolve(fixtures, 'sample.json'), 'utf-8');

  it('should parse JSON array', () => {
    const result = parseDataset(json, { format: 'json' });
    expect(result.headers).toEqual(['input', 'expected', 'tags']);
    expect(result.totalCount).toBe(3);
    expect(result.rows[1]).toMatchObject({ expected: 'Paris', input: 'Capital of France?' });
  });

  it('should support preview mode', () => {
    const result = parseDataset(json, { format: 'json', preview: 2 });
    expect(result.rows).toHaveLength(2);
    expect(result.totalCount).toBe(3);
  });
});

describe('parseDataset - auto detection', () => {
  it('should auto-detect CSV by filename', () => {
    const csv = readFileSync(resolve(fixtures, 'sample.csv'), 'utf-8');
    const result = parseDataset(csv, { filename: 'sample.csv' });
    expect(result.format).toBe('csv');
    expect(result.headers).toContain('prompt');
  });

  it('should auto-detect JSONL by filename', () => {
    const jsonl = readFileSync(resolve(fixtures, 'sample.jsonl'), 'utf-8');
    const result = parseDataset(jsonl, { filename: 'sample.jsonl' });
    expect(result.format).toBe('jsonl');
  });

  it('should auto-detect JSON by content', () => {
    const json = readFileSync(resolve(fixtures, 'sample.json'), 'utf-8');
    const result = parseDataset(json);
    expect(result.format).toBe('json');
  });
});
33  packages/eval-dataset-parser/package.json  Normal file
@@ -0,0 +1,33 @@
{
  "name": "@lobechat/eval-dataset-parser",
  "version": "1.0.0",
  "private": true,
  "description": "Parse CSV, XLSX, JSON, and JSONL files into structured dataset records",
  "keywords": ["dataset", "parser", "csv", "xlsx", "jsonl", "lobehub"],
  "homepage": "https://github.com/lobehub/lobehub/tree/master/packages/eval-dataset-parser",
  "bugs": {
    "url": "https://github.com/lobehub/lobehub/issues/new"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/lobehub/lobehub.git"
  },
  "author": "LobeHub <i@lobehub.com>",
  "sideEffects": false,
  "main": "./src/index.ts",
  "scripts": {
    "test": "vitest",
    "test:coverage": "vitest --coverage --silent='passed-only'"
  },
  "dependencies": {
    "papaparse": "^5.5.2",
    "xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz"
  },
  "devDependencies": {
    "@types/papaparse": "^5.3.15",
    "typescript": "^5.9.3"
  },
  "peerDependencies": {
    "typescript": ">=5"
  }
}
58  packages/eval-dataset-parser/src/detect.ts  Normal file
@@ -0,0 +1,58 @@
import type { DatasetFormat } from './types';

const XLSX_MAGIC = [0x50, 0x4b, 0x03, 0x04]; // PK\x03\x04 (ZIP header)

export function detectFormat(
  input: Buffer | string | Uint8Array,
  filename?: string,
): DatasetFormat {
  // 1. Try filename extension
  if (filename) {
    const ext = filename.split('.').pop()?.toLowerCase();
    if (ext === 'csv') return 'csv';
    if (ext === 'xlsx' || ext === 'xls') return 'xlsx';
    if (ext === 'jsonl') return 'jsonl';
    if (ext === 'json') return 'json';
  }

  // 2. For binary data, check XLSX magic bytes
  if (input instanceof Uint8Array || Buffer.isBuffer(input)) {
    const bytes = input instanceof Uint8Array ? input : new Uint8Array(input);
    if (bytes.length >= 4 && XLSX_MAGIC.every((b, i) => bytes[i] === b)) {
      return 'xlsx';
    }
    // Convert to string for further detection
    const str = new TextDecoder().decode(bytes);
    return detectFromString(str);
  }

  return detectFromString(input as string);
}

function detectFromString(str: string): DatasetFormat {
  const trimmed = str.trim();

  // Try JSON array
  if (trimmed.startsWith('[')) {
    try {
      JSON.parse(trimmed);
      return 'json';
    } catch {
      // not valid JSON
    }
  }

  // Try JSONL (first line is valid JSON object)
  const firstLine = trimmed.split('\n')[0]?.trim();
  if (firstLine?.startsWith('{')) {
    try {
      JSON.parse(firstLine);
      return 'jsonl';
    } catch {
      // not valid JSONL
    }
  }

  // Default to CSV
  return 'csv';
}
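For readers skimming the diff, the content-sniffing fallback in `detect.ts` above can be reduced to a standalone sketch. `sniffFormat` is a hypothetical name used only for illustration; the real `detectFormat` also checks the filename extension and XLSX magic bytes before falling back to sniffing.

```typescript
// Minimal standalone sketch of the sniffing order: JSON array first,
// then JSONL (first line parses as JSON), then CSV as the default.
type Format = 'csv' | 'json' | 'jsonl';

function sniffFormat(text: string): Format {
  const trimmed = text.trim();
  if (trimmed.startsWith('[')) {
    try {
      JSON.parse(trimmed);
      return 'json';
    } catch {
      /* not a valid JSON array — fall through */
    }
  }
  const firstLine = trimmed.split('\n')[0]?.trim();
  if (firstLine?.startsWith('{')) {
    try {
      JSON.parse(firstLine);
      return 'jsonl';
    } catch {
      /* first line is not valid JSON — fall through */
    }
  }
  return 'csv';
}
```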
3  packages/eval-dataset-parser/src/index.ts  Normal file
@@ -0,0 +1,3 @@
export { detectFormat } from './detect';
export { parseDataset } from './parseDataset';
export type { DatasetFormat, ParseOptions, ParseResult } from './types';
42  packages/eval-dataset-parser/src/parseDataset.ts  Normal file
@@ -0,0 +1,42 @@
import { detectFormat } from './detect';
import { parseCSV, parseJSON, parseJSONL, parseXLSX } from './parsers';
import type { ParseOptions, ParseResult } from './types';

export function parseDataset(
  input: Buffer | string | Uint8Array,
  options?: ParseOptions & { filename?: string },
): ParseResult {
  const format =
    options?.format && options.format !== 'auto'
      ? options.format
      : detectFormat(input, options?.filename);

  switch (format) {
    case 'csv': {
      const content = typeof input === 'string' ? input : new TextDecoder().decode(input);
      return parseCSV(content, options);
    }

    case 'xlsx': {
      if (typeof input === 'string') {
        throw new Error('XLSX format requires binary input (Buffer or Uint8Array)');
      }
      const data = input instanceof Uint8Array ? input : new Uint8Array(input);
      return parseXLSX(data, options);
    }

    case 'json': {
      const content = typeof input === 'string' ? input : new TextDecoder().decode(input);
      return parseJSON(content, options);
    }

    case 'jsonl': {
      const content = typeof input === 'string' ? input : new TextDecoder().decode(input);
      return parseJSONL(content, options);
    }

    default: {
      throw new Error(`Unsupported format: ${format}`);
    }
  }
}
22  packages/eval-dataset-parser/src/parsers/csv.ts  Normal file
@@ -0,0 +1,22 @@
import * as Papa from 'papaparse';

import type { ParseOptions, ParseResult } from '../types';

export function parseCSV(content: string, options?: ParseOptions): ParseResult {
  const result = Papa.parse<Record<string, any>>(content, {
    delimiter: options?.csvDelimiter,
    dynamicTyping: true,
    header: true,
    skipEmptyLines: true,
  });

  const rows = options?.preview ? result.data.slice(0, options.preview) : result.data;
  const headers = result.meta.fields || [];

  return {
    format: 'csv',
    headers,
    rows,
    totalCount: result.data.length,
  };
}
4  packages/eval-dataset-parser/src/parsers/index.ts  Normal file
@@ -0,0 +1,4 @@
export { parseCSV } from './csv';
export { parseJSON } from './json';
export { parseJSONL } from './jsonl';
export { parseXLSX } from './xlsx';
19  packages/eval-dataset-parser/src/parsers/json.ts  Normal file
@@ -0,0 +1,19 @@
import type { ParseOptions, ParseResult } from '../types';

export function parseJSON(content: string, options?: ParseOptions): ParseResult {
  const data = JSON.parse(content);

  if (!Array.isArray(data)) {
    throw new Error('JSON file must contain an array of objects');
  }

  const headers = Object.keys(data[0] || {});
  const rows = options?.preview ? data.slice(0, options.preview) : data;

  return {
    format: 'json',
    headers,
    rows,
    totalCount: data.length,
  };
}
28  packages/eval-dataset-parser/src/parsers/jsonl.ts  Normal file
@@ -0,0 +1,28 @@
import type { ParseOptions, ParseResult } from '../types';

export function parseJSONL(content: string, options?: ParseOptions): ParseResult {
  const lines = content
    .split('\n')
    .map((line) => line.trim())
    .filter(Boolean);

  const totalCount = lines.length;
  const linesToParse = options?.preview ? lines.slice(0, options.preview) : lines;

  const rows = linesToParse.map((line, index) => {
    try {
      return JSON.parse(line);
    } catch {
      throw new Error(`Invalid JSON at line ${index + 1}: ${line.slice(0, 100)}`);
    }
  });

  const headers = Object.keys(rows[0] || {});

  return {
    format: 'jsonl',
    headers,
    rows,
    totalCount,
  };
}
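The parsers above all honor the same preview contract: `preview` truncates the returned `rows`, while `totalCount` always reflects the full dataset. A minimal sketch of that invariant (`previewRows` is a hypothetical helper, not part of the package):

```typescript
// Hypothetical helper illustrating the preview contract shared by the parsers:
// rows are truncated to `preview` when given; totalCount is the full count.
function previewRows<T>(all: T[], preview?: number): { rows: T[]; totalCount: number } {
  return {
    rows: preview ? all.slice(0, preview) : all,
    totalCount: all.length,
  };
}
```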
41  packages/eval-dataset-parser/src/parsers/xlsx.ts  Normal file
@@ -0,0 +1,41 @@
import * as XLSX from 'xlsx';

import type { ParseOptions, ParseResult } from '../types';

export function parseXLSX(
  data: Buffer | Uint8Array,
  options?: ParseOptions,
): ParseResult {
  const workbook = XLSX.read(data, { type: 'array' });

  // Select sheet
  let sheetName: string;
  if (typeof options?.sheet === 'string') {
    sheetName = options.sheet;
  } else if (typeof options?.sheet === 'number') {
    sheetName = workbook.SheetNames[options.sheet] || workbook.SheetNames[0];
  } else {
    sheetName = workbook.SheetNames[0];
  }

  const worksheet = workbook.Sheets[sheetName];
  if (!worksheet) {
    return { format: 'xlsx', headers: [], metadata: { sheetName }, rows: [], totalCount: 0 };
  }

  const allRows = XLSX.utils.sheet_to_json<Record<string, any>>(worksheet, {
    defval: '',
    raw: false,
  });

  const headers = Object.keys(allRows[0] || {});
  const rows = options?.preview ? allRows.slice(0, options.preview) : allRows;

  return {
    format: 'xlsx',
    headers,
    metadata: { sheetName },
    rows,
    totalCount: allRows.length,
  };
}
19  packages/eval-dataset-parser/src/types.ts  Normal file
@@ -0,0 +1,19 @@
export type DatasetFormat = 'auto' | 'csv' | 'json' | 'jsonl' | 'xlsx';

export interface ParseOptions {
  csvDelimiter?: string;
  format?: DatasetFormat;
  headerRow?: number;
  preview?: number;
  sheet?: number | string;
}

export interface ParseResult {
  format: DatasetFormat;
  headers: string[];
  metadata?: {
    sheetName?: string;
  };
  rows: Record<string, any>[];
  totalCount: number;
}
16  packages/eval-dataset-parser/vitest.config.mts  Normal file
@@ -0,0 +1,16 @@
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      exclude: [
        '**/types.ts',
        '**/*.d.ts',
        '**/vitest.config.*',
        '**/node_modules/**',
      ],
      reporter: ['text', 'json', 'lcov', 'text-summary'],
    },
    environment: 'node',
  },
});
358  packages/eval-rubric/__tests__/evaluate.test.ts  Normal file
@@ -0,0 +1,358 @@
import type { EvalBenchmarkRubric, EvalTestCaseContent } from '@lobechat/types';
import { describe, expect, it } from 'vitest';

import { evaluate } from '../src';

const equalsRubric: EvalBenchmarkRubric = {
  config: { value: '' },
  id: 'r1',
  name: 'Exact Match',
  type: 'equals',
  weight: 1,
};

describe('evaluate', () => {
  it('should pass when actual matches expected', async () => {
    const testCase: EvalTestCaseContent = { expected: '42', input: 'What is 6*7?' };
    const result = await evaluate({ actual: '42', rubrics: [equalsRubric], testCase });
    expect(result.passed).toBe(true);
    expect(result.score).toBe(1);
  });

  it('should fail when actual does not match', async () => {
    const testCase: EvalTestCaseContent = { expected: '42', input: 'What is 6*7?' };
    const result = await evaluate({ actual: '41', rubrics: [equalsRubric], testCase });
    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
  });

  it('should handle multi-candidate expected (JSON array)', async () => {
    const testCase: EvalTestCaseContent = {
      expected: JSON.stringify(['孙悟空', '悟空', '齐天大圣']),
      input: '西游记主角是谁?',
    };
    const result = await evaluate({ actual: '悟空', rubrics: [equalsRubric], testCase });
    expect(result.passed).toBe(true);
  });

  it('should use extractor from options', async () => {
    const testCase: EvalTestCaseContent = {
      choices: ['0', '1', '2', '3'],
      expected: '1',
      input: 'Find all c in Z_3...',
    };
    const result = await evaluate(
      { actual: 'The answer is B', rubrics: [equalsRubric], testCase },
      {
        extractor: { type: 'choice-index' },
      },
    );
    expect(result.passed).toBe(true);
    expect(result.score).toBe(1);
  });

  it('should use extractor from rubric over options', async () => {
    const rubricWithExtractor: EvalBenchmarkRubric = {
      ...equalsRubric,
      extractor: { type: 'delimiter', delimiter: '####' },
    };
    const testCase: EvalTestCaseContent = { expected: '9', input: 'Calculate...' };
    const result = await evaluate({
      actual: 'blah blah #### 9',
      rubrics: [rubricWithExtractor],
      testCase,
    });
    expect(result.passed).toBe(true);
  });

  it('should compute weighted score across rubrics', async () => {
    const rubrics: EvalBenchmarkRubric[] = [
      { ...equalsRubric, id: 'r1', weight: 2 },
      { ...equalsRubric, id: 'r2', type: 'contains', weight: 1 },
    ];
    const testCase: EvalTestCaseContent = { expected: '42', input: '...' };
    // equals fails (actual != expected), contains passes (actual contains '42')
    const result = await evaluate({ actual: 'The answer is 42', rubrics, testCase });
    // equals: 0 * 2 = 0, contains: 1 * 1 = 1, total = 1/3 ≈ 0.33
    expect(result.score).toBeCloseTo(1 / 3, 2);
    expect(result.passed).toBe(false); // below 0.6 threshold
  });

  it('should use default contains when no rubrics but expected exists', async () => {
    const testCase: EvalTestCaseContent = { expected: '42', input: '...' };
    const result = await evaluate({ actual: 'The answer is 42', rubrics: [], testCase });
    expect(result.passed).toBe(true);
    expect(result.score).toBe(1);
    expect(result.rubricResults).toHaveLength(1);
    expect(result.rubricResults[0].rubricId).toBe('default-contains');
  });

  it('should fail with default contains when actual does not contain expected', async () => {
    const testCase: EvalTestCaseContent = { expected: '42', input: '...' };
    const result = await evaluate({ actual: 'I have no idea', rubrics: [], testCase });
    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
    expect(result.rubricResults).toHaveLength(1);
    expect(result.rubricResults[0].rubricId).toBe('default-contains');
  });

  it('should return failed with no rubrics and no expected', async () => {
    const testCase: EvalTestCaseContent = { input: '...' };
    const result = await evaluate({ actual: '42', rubrics: [], testCase });
    expect(result.passed).toBe(false);
    expect(result.rubricResults).toHaveLength(0);
  });

  it('should respect custom passThreshold', async () => {
    const testCase: EvalTestCaseContent = { expected: '42', input: '...' };
    const rubrics: EvalBenchmarkRubric[] = [
      { ...equalsRubric, id: 'r1', weight: 1 },
      { ...equalsRubric, id: 'r2', type: 'contains', weight: 1 },
    ];
    // equals fails, contains passes → score = 0.5
    const result = await evaluate(
      { actual: 'The answer is 42', rubrics, testCase },
      { passThreshold: 0.5 },
    );
    expect(result.passed).toBe(true);
  });
});

describe('evaluate - MMLU end-to-end', () => {
  it('should correctly evaluate MMLU-style question', async () => {
    const testCase: EvalTestCaseContent = {
      choices: ['0', '1', '2', '3'],
      expected: '1',
      input: 'Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.',
    };

    const rubrics: EvalBenchmarkRubric[] = [
      {
        config: { value: '' },
        id: 'mmlu-match',
        name: 'Choice Match',
        type: 'equals',
        weight: 1,
      },
    ];

    // Agent says "B" → extractor maps to index 1 → matches expected "1"
    const result = await evaluate(
      { actual: 'The answer is B', rubrics, testCase },
      { extractor: { type: 'choice-index' }, passThreshold: 0.6 },
    );

    expect(result.passed).toBe(true);
    expect(result.score).toBe(1);
    expect(result.rubricResults[0].passed).toBe(true);
  });

  it('should fail when agent gives wrong answer', async () => {
    const testCase: EvalTestCaseContent = {
      choices: ['0', '1', '2', '3'],
      expected: '1',
      input: 'Find all c in Z_3...',
    };

    const result = await evaluate(
      { actual: 'I think the answer is C', rubrics: [equalsRubric], testCase },
      { extractor: { type: 'choice-index' } },
    );

    expect(result.passed).toBe(false); // C → 2, expected 1
  });

  it('should handle MMLU with verbose reasoning before answer', async () => {
    const testCase: EvalTestCaseContent = {
      choices: ['True, True', 'False, False', 'True, False', 'False, True'],
      expected: '2',
      input: 'Statement 1 | Every element of a group generates a cyclic subgroup...',
    };

    const result = await evaluate(
      {
        actual:
          'Let me think step by step.\nStatement 1 is true because...\nStatement 2 is false because S_10 has 10! elements.\nTherefore the answer is C.',
        rubrics: [equalsRubric],
        testCase,
      },
      { extractor: { type: 'choice-index' } },
    );

    expect(result.passed).toBe(true);
    expect(result.score).toBe(1);
  });
});

describe('evaluate - GSM8K end-to-end', () => {
  const numericRubric: EvalBenchmarkRubric = {
    config: { tolerance: 0.01, value: 0 },
    id: 'gsm8k-numeric',
    name: 'Numeric Match',
    type: 'numeric',
    weight: 1,
  };

  it('should extract answer after #### delimiter and match numerically', async () => {
    const testCase: EvalTestCaseContent = {
      expected: '9',
      input: 'Janet sells 16-3-4=<<16-3-4=9>>9 duck eggs. How many?',
    };

    const result = await evaluate({
      actual:
        'Janet has 16 eggs. She eats 3 and bakes 4. So 16-3-4=9 eggs remain.\n\nThe answer is 9.',
      rubrics: [
        {
          ...numericRubric,
          extractor: { type: 'last-line' },
        },
      ],
      testCase,
    });

    expect(result.passed).toBe(true);
  });

  it('should handle GSM8K delimiter extraction', async () => {
    const testCase: EvalTestCaseContent = {
      expected: '42',
      input: 'A store sells...',
    };

    const result = await evaluate({
      actual: 'First we calculate... then we add... #### 42',
      rubrics: [
        {
          ...numericRubric,
          extractor: { type: 'delimiter', delimiter: '####' },
        },
      ],
      testCase,
    });

    expect(result.passed).toBe(true);
  });

  it('should handle decimal tolerance', async () => {
    const testCase: EvalTestCaseContent = {
      expected: '3.14',
      input: 'What is pi to 2 decimal places?',
    };

    const result = await evaluate({
      actual: '3.14159',
      rubrics: [{ ...numericRubric, config: { tolerance: 0.01, value: 3.14 } }],
      testCase,
    });

    expect(result.passed).toBe(true);
  });
});

describe('evaluate - browsecomp-zh / xbench style', () => {
  it('should match with contains for short answer in long output', async () => {
    const containsRubric: EvalBenchmarkRubric = {
      config: { value: '' },
      id: 'contains-match',
      name: 'Contains Match',
      type: 'contains',
      weight: 1,
    };
    const testCase: EvalTestCaseContent = {
      expected: '161.27元',
      input: '某产品的价格是多少?',
    };

    const result = await evaluate({
      actual: '根据查询结果,该产品的售价为161.27元,目前有货。',
      rubrics: [containsRubric],
      testCase,
    });

    expect(result.passed).toBe(true);
  });

  it('should handle multi-candidate Chinese answers', async () => {
    const testCase: EvalTestCaseContent = {
      expected: JSON.stringify(['孙悟空', '悟空', '齐天大圣', '美猴王']),
      input: '西游记中大闹天宫的是谁?',
    };

    // Test with different valid answers
    expect((await evaluate({ actual: '齐天大圣', rubrics: [equalsRubric], testCase })).passed).toBe(
      true,
    );
    expect((await evaluate({ actual: '美猴王', rubrics: [equalsRubric], testCase })).passed).toBe(
      true,
    );
    expect((await evaluate({ actual: '猪八戒', rubrics: [equalsRubric], testCase })).passed).toBe(
      false,
    );
  });

  it('should handle xbench style with single round answer', async () => {
    const testCase: EvalTestCaseContent = {
      expected: '1轮',
      input: '某比赛第几轮?',
    };

    const result = await evaluate({ actual: '1轮', rubrics: [equalsRubric], testCase });
    expect(result.passed).toBe(true);
  });
});

describe('evaluate - edge cases', () => {
  it('should handle empty actual output', async () => {
    const testCase: EvalTestCaseContent = { expected: '42', input: '...' };
    const result = await evaluate({ actual: '', rubrics: [equalsRubric], testCase });
    expect(result.passed).toBe(false);
  });

  it('should handle undefined expected', async () => {
    const testCase: EvalTestCaseContent = { input: '...' };
    const result = await evaluate({ actual: 'anything', rubrics: [equalsRubric], testCase });
    // empty string vs 'anything' → fails
    expect(result.passed).toBe(false);
  });

  it('should handle whitespace-only output with extractor', async () => {
    const testCase: EvalTestCaseContent = { expected: '1', input: '...' };
    const result = await evaluate(
      { actual: ' \n \n ', rubrics: [equalsRubric], testCase },
      { extractor: { type: 'last-line' } },
    );
    expect(result.passed).toBe(false);
  });

  it('should handle multiple rubrics with different extractors', async () => {
    const rubrics: EvalBenchmarkRubric[] = [
      {
        config: { value: '' },
        extractor: { type: 'choice-index' },
        id: 'choice',
        name: 'Choice',
        type: 'equals',
        weight: 1,
      },
      {
        config: { value: '' },
        id: 'raw-contains',
        name: 'Raw Contains',
        type: 'contains',
        weight: 1,
      },
    ];
    const testCase: EvalTestCaseContent = {
      expected: '1',
      input: '...',
    };

    // "B" → choice-index extracts "1" → equals "1" ✓
    // raw output "The answer is B" contains "1"? No → ✗
    const result = await evaluate({ actual: 'The answer is B', rubrics, testCase });
    expect(result.score).toBeCloseTo(0.5, 2);
    expect(result.rubricResults[0].passed).toBe(true);
    expect(result.rubricResults[1].passed).toBe(false);
  });
});
65  packages/eval-rubric/__tests__/extractors.test.ts  Normal file
@@ -0,0 +1,65 @@
import { describe, expect, it } from 'vitest';

import { extract } from '../src';

describe('extract - regex', () => {
  it('should extract with capture group', () => {
    expect(extract('The answer is B.', { type: 'regex', pattern: '([A-D])' })).toBe('B');
  });

  it('should return full match if no capture group', () => {
    expect(extract('42', { type: 'regex', pattern: '\\d+', group: 0 })).toBe('42');
  });

  it('should return original output if no match', () => {
    expect(extract('no match here', { type: 'regex', pattern: '\\d+' })).toBe('no match here');
  });
});

describe('extract - delimiter', () => {
  it('should extract after delimiter (last segment)', () => {
    expect(
      extract('Step 1... Step 2... #### 42', { type: 'delimiter', delimiter: '####' }),
    ).toBe('42');
  });

  it('should extract first segment after delimiter', () => {
    expect(
      extract('a|b|c', { type: 'delimiter', delimiter: '|', position: 'first' }),
    ).toBe('b');
  });

  it('should return original if delimiter not found', () => {
    expect(extract('no delimiter', { type: 'delimiter', delimiter: '####' })).toBe('no delimiter');
  });
});

describe('extract - last-line', () => {
  it('should extract last non-empty line', () => {
    expect(extract('line 1\nline 2\nthe answer\n', { type: 'last-line' })).toBe('the answer');
  });

  it('should trim by default', () => {
    expect(extract('first\n second ', { type: 'last-line' })).toBe('second');
  });
});

describe('extract - choice-index', () => {
  it('should map letter to index with default labels', () => {
    expect(extract('The answer is C', { type: 'choice-index' })).toBe('2');
  });

  it('should map B to 1', () => {
    expect(extract('B', { type: 'choice-index' })).toBe('1');
  });

  it('should use custom labels', () => {
    expect(
      extract('Answer: 2', { type: 'choice-index', labels: ['1', '2', '3', '4'], pattern: '[1-4]' }),
    ).toBe('1');
  });

  it('should return original if no letter found', () => {
    expect(extract('I think so', { type: 'choice-index' })).toBe('I think so');
  });
});
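The choice-index behavior exercised by the tests above (standalone letter → index string, passthrough when nothing matches) can be sketched in isolation. `choiceIndex` is a hypothetical name for illustration only; the package's real implementation lives in `extractors.ts` and scans all matches, keeping the last standalone label.

```typescript
// Standalone sketch: find standalone A–D labels (word-boundary, case-sensitive),
// keep the last one, and return its zero-based index as a string.
// If no label is found, return the output unchanged.
function choiceIndex(output: string, labels: string[] = ['A', 'B', 'C', 'D']): string {
  const pattern = new RegExp(`\\b([${labels.join('')}])\\b`, 'g');
  let last: string | undefined;
  for (const m of output.matchAll(pattern)) last = m[1];
  return last === undefined ? output : String(labels.indexOf(last));
}
```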
38  packages/eval-rubric/package.json  Normal file
@@ -0,0 +1,38 @@
{
  "name": "@lobechat/eval-rubric",
  "version": "1.0.0",
  "private": true,
  "description": "Rubric evaluator engine for agent evaluation benchmarks",
  "keywords": [
    "eval",
    "rubric",
    "evaluator",
    "benchmark",
    "lobehub"
  ],
  "homepage": "https://github.com/lobehub/lobehub/tree/master/packages/eval-rubric",
  "bugs": {
    "url": "https://github.com/lobehub/lobehub/issues/new"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/lobehub/lobehub.git"
  },
  "author": "LobeHub <i@lobehub.com>",
  "sideEffects": false,
  "main": "./src/index.ts",
  "scripts": {
    "test": "vitest",
    "test:coverage": "vitest --coverage --silent='passed-only'"
  },
  "dependencies": {
    "@lobechat/types": "workspace:*",
    "ajv": "^8.17.1"
  },
  "devDependencies": {
    "typescript": "^5.9.3"
  },
  "peerDependencies": {
    "typescript": ">=5"
  }
}
||||
127
packages/eval-rubric/src/evaluate.ts
Normal file
127
packages/eval-rubric/src/evaluate.ts
Normal file
|
|
@ -0,0 +1,127 @@
|
|||
import type { AnswerExtractor, EvalBenchmarkRubric, EvalTestCaseContent } from '@lobechat/types';
|
||||
|
||||
import { extract } from './extractors';
|
||||
import { match, type MatchContext, type MatchResult } from './matchers';
|
||||
|
||||
export interface EvaluateResult {
|
||||
passed: boolean;
|
||||
reason?: string;
|
||||
rubricResults: RubricResult[];
|
||||
score: number;
|
||||
}
|
||||
|
||||
export interface RubricResult {
|
||||
passed: boolean;
|
||||
reason?: string;
|
||||
rubricId: string;
|
||||
score: number;
|
||||
}
|
||||
|
||||
export interface EvaluateOptions {
|
||||
/**
|
||||
* Default extractor applied before matching (benchmark-level)
|
||||
*/
|
||||
extractor?: AnswerExtractor;
|
||||
/**
|
||||
* Context for LLM-based rubrics, passed through to match()
|
||||
*/
|
||||
matchContext?: MatchContext;
|
||||
/**
|
||||
* Pass threshold for overall score
|
||||
* @default 0.6
|
||||
*/
|
||||
passThreshold?: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Evaluate agent output against a test case using one or more rubrics.
|
||||
*
|
||||
* Flow:
|
||||
* 1. For each rubric, optionally extract answer from output
|
||||
* 2. If expected is a JSON array string, try any-of matching
|
||||
* 3. Run the rubric matcher
|
||||
* 4. Compute weighted score
|
||||
*/
|
||||
export const evaluate = async (
|
||||
  params: { actual: string; rubrics: EvalBenchmarkRubric[]; testCase: EvalTestCaseContent },
  options: EvaluateOptions = {},
): Promise<EvaluateResult> => {
  const { actual: actualOutput, rubrics: inputRubrics, testCase } = params;
  const { passThreshold = 0.6, matchContext } = options;

  let rubrics = inputRubrics;

  if (!rubrics || rubrics.length === 0) {
    if (testCase.expected) {
      rubrics = [
        {
          config: {} as any,
          id: 'default-contains',
          name: 'Default Contains',
          type: 'contains',
          weight: 1,
        },
      ];
    } else {
      return { passed: false, reason: 'No rubrics configured', rubricResults: [], score: 0 };
    }
  }

  const rubricResults: RubricResult[] = [];
  let totalWeight = 0;
  let weightedScore = 0;

  for (const rubric of rubrics) {
    // Step 1: Extract answer if extractor is configured
    const extractor = rubric.extractor ?? options.extractor;
    const extracted = extractor ? extract(actualOutput, extractor) : actualOutput;

    // Step 2: Resolve expected value
    const expected = testCase.expected;

    // Step 3: Handle multi-candidate (JSON array string in expected)
    let result: MatchResult;

    if (rubric.type !== 'any-of' && expected && isJsonArray(expected)) {
      // Auto any-of: try each candidate
      const candidates: string[] = JSON.parse(expected);
      const results: MatchResult[] = [];
      for (const c of candidates) {
        results.push(await match({ actual: extracted, expected: c, rubric }, matchContext));
      }
      const best = results.reduce((a, b) => (a.score >= b.score ? a : b));
      result = best;
    } else {
      result = await match({ actual: extracted, expected, rubric }, matchContext);
    }

    rubricResults.push({
      passed: result.passed,
      reason: result.reason,
      rubricId: rubric.id,
      score: result.score,
    });

    totalWeight += rubric.weight;
    weightedScore += result.score * rubric.weight;
  }

  const score = totalWeight > 0 ? weightedScore / totalWeight : 0;
  const passed = score >= passThreshold;

  return {
    passed,
    rubricResults,
    score,
  };
};

function isJsonArray(s: string): boolean {
  if (!s.startsWith('[')) return false;
  try {
    const parsed = JSON.parse(s);
    return Array.isArray(parsed);
  } catch {
    return false;
  }
}
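The loop above reduces per-rubric scores to a single weighted mean that is compared against `passThreshold`. A minimal self-contained sketch of that aggregation arithmetic (the names `RubricScore` and `aggregate` are illustrative, not part of the package's API):

```typescript
interface RubricScore {
  score: number; // per-rubric score in [0, 1]
  weight: number; // relative importance of the rubric
}

// Weighted mean of rubric scores, mirroring the evaluate() loop above.
const aggregate = (results: RubricScore[], passThreshold = 0.6) => {
  const totalWeight = results.reduce((sum, r) => sum + r.weight, 0);
  const weightedScore = results.reduce((sum, r) => sum + r.score * r.weight, 0);
  const score = totalWeight > 0 ? weightedScore / totalWeight : 0;
  return { passed: score >= passThreshold, score };
};

// A weight-2 rubric scoring 1.0 and a weight-1 rubric scoring 0.0 average to 2/3.
const verdict = aggregate([
  { score: 1, weight: 2 },
  { score: 0, weight: 1 },
]);
```

Note the guard for `totalWeight === 0`: with no rubrics the run scores 0 rather than dividing by zero.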
47  packages/eval-rubric/src/extractors.ts  Normal file
@@ -0,0 +1,47 @@
import type { AnswerExtractor } from '@lobechat/types';

/**
 * Extract answer from raw agent output using the configured extractor
 */
export const extract = (output: string, extractor: AnswerExtractor): string => {
  switch (extractor.type) {
    case 'regex': {
      const match = new RegExp(extractor.pattern).exec(output);
      if (!match) return output;
      const group = extractor.group ?? 1;
      return match[group] ?? match[0];
    }

    case 'delimiter': {
      const parts = output.split(extractor.delimiter);
      if (parts.length < 2) return output;
      const segment = extractor.position === 'first' ? parts[1] : parts[parts.length - 1];
      return segment.trim();
    }

    case 'last-line': {
      const lines = output.split('\n').filter((l) => l.trim());
      if (lines.length === 0) return output;
      const last = lines[lines.length - 1];
      return extractor.trim !== false ? last.trim() : last;
    }

    case 'choice-index': {
      const labels = extractor.labels ?? ['A', 'B', 'C', 'D'];
      // Default pattern: match a standalone choice label (word boundary)
      const pattern = extractor.pattern ?? `\\b([${labels.join('')}])\\b`;
      // Try all matches and pick the last one (most likely the actual answer)
      const regex = new RegExp(pattern, 'gi');
      let lastMatch: RegExpExecArray | null = null;
      let m: RegExpExecArray | null;
      while ((m = regex.exec(output)) !== null) {
        lastMatch = m;
      }
      if (!lastMatch) return output;
      const letter = (lastMatch[1] ?? lastMatch[0]).toUpperCase();
      const idx = labels.indexOf(letter);
      return idx >= 0 ? String(idx) : output;
    }
  }
};
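The `choice-index` branch above is the least obvious extractor: it scans for the *last* standalone choice label and returns its index as a string, falling back to the raw output when nothing matches. A standalone sketch of that logic (the name `extractChoiceIndex` is illustrative, not the package's export):

```typescript
// Scan free-form model output for the last standalone choice label (word-bounded),
// returning its zero-based index as a string, or the raw output if none is found.
const extractChoiceIndex = (output: string, labels = ['A', 'B', 'C', 'D']): string => {
  const regex = new RegExp(`\\b([${labels.join('')}])\\b`, 'gi');
  let lastMatch: RegExpExecArray | null = null;
  let m: RegExpExecArray | null;
  while ((m = regex.exec(output)) !== null) lastMatch = m;
  if (!lastMatch) return output; // no label found: fall back to raw output
  const idx = labels.indexOf(lastMatch[1].toUpperCase());
  return idx >= 0 ? String(idx) : output;
};
```

Taking the last match rather than the first is deliberate: chain-of-thought output often mentions several candidate labels before stating the final answer.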
6  packages/eval-rubric/src/index.ts  Normal file
@@ -0,0 +1,6 @@
export type { EvaluateOptions, EvaluateResult, RubricResult } from './evaluate';
export { evaluate } from './evaluate';
export { extract } from './extractors';
export type { GenerateObjectPayload, MatchContext, MatchResult } from './matchers';
export { match } from './matchers';
export { normalize } from './normalize';
19  packages/eval-rubric/src/matchers/__tests__/anyOf.test.ts  Normal file
@@ -0,0 +1,19 @@
import { describe, expect, it } from 'vitest';

import { matchAnyOf } from '../anyOf';

describe('matchAnyOf', () => {
  it('should pass when matching any candidate', () => {
    expect(matchAnyOf('Dog', { values: ['cat', 'dog', 'bird'] } as any).passed).toBe(true);
  });

  it('should fail when none match', () => {
    expect(matchAnyOf('fish', { values: ['cat', 'dog'] } as any).passed).toBe(false);
  });

  it('should respect caseSensitive flag', () => {
    const config = { caseSensitive: true, values: ['Dog'] } as any;
    expect(matchAnyOf('dog', config).passed).toBe(false);
    expect(matchAnyOf('Dog', config).passed).toBe(true);
  });
});
13  packages/eval-rubric/src/matchers/__tests__/contains.test.ts  Normal file
@@ -0,0 +1,13 @@
import { describe, expect, it } from 'vitest';

import { matchContains } from '../contains';

describe('matchContains', () => {
  it('should pass when actual contains expected', () => {
    expect(matchContains('The answer is 42', '42').passed).toBe(true);
  });

  it('should fail when not contained', () => {
    expect(matchContains('no match', '42').passed).toBe(false);
  });
});
13  packages/eval-rubric/src/matchers/__tests__/endsWith.test.ts  Normal file
@@ -0,0 +1,13 @@
import { describe, expect, it } from 'vitest';

import { matchEndsWith } from '../endsWith';

describe('matchEndsWith', () => {
  it('should pass when ends with expected', () => {
    expect(matchEndsWith('Hello world', 'world').passed).toBe(true);
  });

  it('should fail when not ending with expected', () => {
    expect(matchEndsWith('Hello world', 'hello').passed).toBe(false);
  });
});
17  packages/eval-rubric/src/matchers/__tests__/equals.test.ts  Normal file
@@ -0,0 +1,17 @@
import { describe, expect, it } from 'vitest';

import { matchEquals } from '../equals';

describe('matchEquals', () => {
  it('should pass on exact match (case-insensitive)', () => {
    expect(matchEquals('Hello', 'hello').passed).toBe(true);
  });

  it('should fail on mismatch', () => {
    expect(matchEquals('Hello', 'world').passed).toBe(false);
  });

  it('should trim whitespace', () => {
    expect(matchEquals(' answer ', 'answer').passed).toBe(true);
  });
});
@@ -0,0 +1,31 @@
import { describe, expect, it } from 'vitest';

import { matchJsonSchema } from '../jsonSchema';

const schema = {
  properties: { age: { type: 'number' }, name: { type: 'string' } },
  required: ['name'],
  type: 'object',
};

describe('matchJsonSchema', () => {
  it('should pass when JSON matches schema', () => {
    const result = matchJsonSchema('{"name":"Alice","age":30}', { schema } as any);
    expect(result.passed).toBe(true);
    expect(result.score).toBe(1);
  });

  it('should fail when JSON does not match schema', () => {
    const result = matchJsonSchema('{"age":"not a number"}', { schema } as any);
    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
    expect(result.reason).toBeDefined();
  });

  it('should fail when output is not valid JSON', () => {
    const result = matchJsonSchema('not json at all', { schema } as any);
    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
    expect(result.reason).toBe('Output is not valid JSON');
  });
});
@@ -0,0 +1,24 @@
import { describe, expect, it } from 'vitest';

import { matchLevenshtein } from '../levenshtein';

describe('matchLevenshtein', () => {
  it('should pass for similar strings', () => {
    expect(matchLevenshtein('hello', 'helo', { threshold: 0.7 } as any).passed).toBe(true);
  });

  it('should fail for dissimilar strings', () => {
    expect(matchLevenshtein('hello', 'world', { threshold: 0.9 } as any).passed).toBe(false);
  });

  it('should return similarity score', () => {
    const result = matchLevenshtein('abc', 'abc', { threshold: 0 } as any);
    expect(result.score).toBe(1);
  });

  it('should handle empty strings', () => {
    const result = matchLevenshtein('', '', { threshold: 0.8 } as any);
    expect(result.score).toBe(1);
    expect(result.passed).toBe(true);
  });
});
196  packages/eval-rubric/src/matchers/__tests__/llmRubric.test.ts  Normal file
@@ -0,0 +1,196 @@
import type { EvalBenchmarkRubric } from '@lobechat/types';
import { beforeEach, describe, expect, it, vi } from 'vitest';

import { matchLLMRubric } from '../llmRubric';
import type { GenerateObjectPayload, MatchContext } from '../types';

const rubric = (
  config: any = {},
  overrides?: Partial<EvalBenchmarkRubric>,
): EvalBenchmarkRubric => ({
  config,
  id: 'test',
  name: 'test',
  type: 'llm-rubric',
  weight: 1,
  ...overrides,
});

describe('matchLLMRubric', () => {
  const mockGenerateObject =
    vi.fn<(payload: GenerateObjectPayload) => Promise<{ reason: string; score: number }>>();

  const context: MatchContext = {
    generateObject: mockGenerateObject,
    judgeModel: 'gpt-4o',
  };

  beforeEach(() => {
    mockGenerateObject.mockReset();
  });

  it('should pass when LLM returns high score', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'Output is correct', score: 0.9 });

    const result = await matchLLMRubric(
      'Paris',
      'Paris',
      rubric({ criteria: 'Is the answer correct?' }),
      context,
    );

    expect(result.passed).toBe(true);
    expect(result.score).toBe(0.9);
    expect(result.reason).toBe('Output is correct');
  });

  it('should fail when LLM returns low score', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'Output is wrong', score: 0.2 });

    const result = await matchLLMRubric(
      'London',
      'Paris',
      rubric({ criteria: 'Is the answer correct?' }),
      context,
    );

    expect(result.passed).toBe(false);
    expect(result.score).toBe(0.2);
    expect(result.reason).toBe('Output is wrong');
  });

  it('should respect custom threshold from rubric', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'Partially correct', score: 0.5 });

    const result = await matchLLMRubric(
      'answer',
      undefined,
      rubric({ criteria: 'Check correctness' }, { threshold: 0.4 }),
      context,
    );

    expect(result.passed).toBe(true);
    expect(result.score).toBe(0.5);
  });

  it('should clamp score to [0, 1]', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'overflow', score: 1.5 });

    const result = await matchLLMRubric('x', undefined, rubric({ criteria: 'test' }), context);

    expect(result.score).toBe(1);
  });

  it('should return score 0 when generateObject is not available', async () => {
    const result = await matchLLMRubric('x', undefined, rubric({ criteria: 'test' }));

    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
    expect(result.reason).toBe('LLM judge not available');
  });

  it('should handle LLM call failure gracefully', async () => {
    mockGenerateObject.mockRejectedValue(new Error('API timeout'));

    const result = await matchLLMRubric('x', undefined, rubric({ criteria: 'test' }), context);

    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
    expect(result.reason).toBe('LLM judge failed: API timeout');
  });

  it('should use rubric config model/provider over context judgeModel', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'ok', score: 1 });

    await matchLLMRubric(
      'x',
      undefined,
      rubric({
        criteria: 'test',
        model: 'claude-sonnet-4-20250514',
        provider: 'anthropic',
      }),
      context,
    );

    expect(mockGenerateObject).toHaveBeenCalledWith(
      expect.objectContaining({
        model: 'claude-sonnet-4-20250514',
        provider: 'anthropic',
      }),
    );
  });

  it('should fallback to context.judgeModel when rubric config has no model', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'ok', score: 1 });

    await matchLLMRubric('x', undefined, rubric({ criteria: 'test' }), context);

    expect(mockGenerateObject).toHaveBeenCalledWith(expect.objectContaining({ model: 'gpt-4o' }));
  });

  it('should return score 0 when no judge model configured', async () => {
    const result = await matchLLMRubric('x', undefined, rubric({ criteria: 'test' }), {
      generateObject: mockGenerateObject,
    });

    expect(result.passed).toBe(false);
    expect(result.score).toBe(0);
    expect(result.reason).toBe('No judge model configured');
    expect(mockGenerateObject).not.toHaveBeenCalled();
  });

  it('should include expected in user prompt when provided', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'ok', score: 1 });

    await matchLLMRubric('Paris', 'Paris', rubric({ criteria: 'Check answer' }), context);

    const payload = mockGenerateObject.mock.calls[0][0];
    const userMsg = payload.messages.find((m) => m.role === 'user')!;
    expect(userMsg.content).toContain('[Expected]');
    expect(userMsg.content).toContain('Paris');
  });

  it('should omit expected section when not provided', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'ok', score: 1 });

    await matchLLMRubric(
      'some output',
      undefined,
      rubric({ criteria: 'Is this helpful?' }),
      context,
    );

    const payload = mockGenerateObject.mock.calls[0][0];
    const userMsg = payload.messages.find((m) => m.role === 'user')!;
    expect(userMsg.content).not.toContain('[Expected]');
    expect(userMsg.content).toContain('[Criteria]');
    expect(userMsg.content).toContain('[Output]');
  });

  it('should use custom systemRole from rubric config', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'ok', score: 1 });
    const customSystemRole = 'You are a code review expert. Score code quality from 0 to 1.';

    await matchLLMRubric(
      'function add(a, b) { return a + b; }',
      undefined,
      rubric({ criteria: 'Is the code clean?', systemRole: customSystemRole }),
      context,
    );

    const payload = mockGenerateObject.mock.calls[0][0];
    const systemMsg = payload.messages.find((m) => m.role === 'system')!;
    expect(systemMsg.content).toBe(customSystemRole);
  });

  it('should use default systemRole when not configured', async () => {
    mockGenerateObject.mockResolvedValue({ reason: 'ok', score: 1 });

    await matchLLMRubric('x', undefined, rubric({ criteria: 'test' }), context);

    const payload = mockGenerateObject.mock.calls[0][0];
    const systemMsg = payload.messages.find((m) => m.role === 'system')!;
    expect(systemMsg.content).toContain('expert evaluation judge');
  });
});
25  packages/eval-rubric/src/matchers/__tests__/numeric.test.ts  Normal file
@@ -0,0 +1,25 @@
import { describe, expect, it } from 'vitest';

import { matchNumeric } from '../numeric';

describe('matchNumeric', () => {
  it('should pass within tolerance', () => {
    expect(matchNumeric('42.3', '42', { tolerance: 0.5, value: 42 } as any).passed).toBe(true);
  });

  it('should fail outside tolerance', () => {
    expect(matchNumeric('43', '42', { tolerance: 0.01, value: 42 } as any).passed).toBe(false);
  });

  it('should extract number from text', () => {
    expect(
      matchNumeric('The answer is $9.00', '9', { tolerance: 0.01, value: 9 } as any).passed,
    ).toBe(true);
  });

  it('should return error when cannot parse number', () => {
    const result = matchNumeric('no number here', undefined, { value: 42 } as any);
    expect(result.passed).toBe(false);
    expect(result.reason).toContain('Could not parse number');
  });
});
13  packages/eval-rubric/src/matchers/__tests__/regex.test.ts  Normal file
@@ -0,0 +1,13 @@
import { describe, expect, it } from 'vitest';

import { matchRegex } from '../regex';

describe('matchRegex', () => {
  it('should pass when pattern matches', () => {
    expect(matchRegex('answer: 42', { pattern: '\\d+' } as any).passed).toBe(true);
  });

  it('should fail when no match', () => {
    expect(matchRegex('no numbers', { pattern: '\\d+' } as any).passed).toBe(false);
  });
});
@@ -0,0 +1,13 @@
import { describe, expect, it } from 'vitest';

import { matchStartsWith } from '../startsWith';

describe('matchStartsWith', () => {
  it('should pass when starts with expected', () => {
    expect(matchStartsWith('Hello world', 'hello').passed).toBe(true);
  });

  it('should fail when not starting with expected', () => {
    expect(matchStartsWith('Hello world', 'world').passed).toBe(false);
  });
});
13  packages/eval-rubric/src/matchers/anyOf.ts  Normal file
@@ -0,0 +1,13 @@
import type { RubricConfig } from '@lobechat/types';

import { normalize } from '../normalize';
import type { MatchResult } from './types';

export const matchAnyOf = (actual: string, config: RubricConfig): MatchResult => {
  const cfg = config as { caseSensitive?: boolean; values: string[] };
  const candidates = cfg.values;
  const cs = cfg.caseSensitive ?? false;
  const a = normalize(actual, cs);
  const passed = candidates.some((c) => normalize(c, cs) === a);
  return { passed, score: passed ? 1 : 0 };
};
9  packages/eval-rubric/src/matchers/contains.ts  Normal file
@@ -0,0 +1,9 @@
import { normalize } from '../normalize';
import type { MatchResult } from './types';

export const matchContains = (actual: string, expected: string | undefined): MatchResult => {
  const a = normalize(actual);
  const e = normalize(expected ?? '');
  const passed = a.includes(e);
  return { passed, score: passed ? 1 : 0 };
};
9  packages/eval-rubric/src/matchers/endsWith.ts  Normal file
@@ -0,0 +1,9 @@
import { normalize } from '../normalize';
import type { MatchResult } from './types';

export const matchEndsWith = (actual: string, expected: string | undefined): MatchResult => {
  const a = normalize(actual);
  const e = normalize(expected ?? '');
  const passed = a.endsWith(e);
  return { passed, score: passed ? 1 : 0 };
};
9  packages/eval-rubric/src/matchers/equals.ts  Normal file
@@ -0,0 +1,9 @@
import { normalize } from '../normalize';
import type { MatchResult } from './types';

export const matchEquals = (actual: string, expected: string | undefined): MatchResult => {
  const a = normalize(actual);
  const e = normalize(expected ?? '');
  const passed = a === e;
  return { passed, score: passed ? 1 : 0 };
};
76  packages/eval-rubric/src/matchers/index.ts  Normal file
@@ -0,0 +1,76 @@
import type { EvalBenchmarkRubric } from '@lobechat/types';

import { matchAnyOf } from './anyOf';
import { matchContains } from './contains';
import { matchEndsWith } from './endsWith';
import { matchEquals } from './equals';
import { matchJsonSchema } from './jsonSchema';
import { matchLevenshtein } from './levenshtein';
import { matchLLMRubric } from './llmRubric';
import { matchNumeric } from './numeric';
import { matchRegex } from './regex';
import { matchStartsWith } from './startsWith';
import type { MatchContext, MatchResult } from './types';

export type { GenerateObjectPayload, MatchContext, MatchResult } from './types';

/**
 * Run a single rubric matcher against actual vs expected
 */
export const match = async (
  params: { actual: string; expected: string | undefined; rubric: EvalBenchmarkRubric },
  context?: MatchContext,
): Promise<MatchResult> => {
  const { actual, expected, rubric } = params;
  const { type, config } = rubric;

  switch (type) {
    case 'equals': {
      return matchEquals(actual, expected);
    }

    case 'contains': {
      return matchContains(actual, expected);
    }

    case 'starts-with': {
      return matchStartsWith(actual, expected);
    }

    case 'ends-with': {
      return matchEndsWith(actual, expected);
    }

    case 'regex': {
      return matchRegex(actual, config);
    }

    case 'any-of': {
      return matchAnyOf(actual, config);
    }

    case 'numeric': {
      return matchNumeric(actual, expected, config);
    }

    case 'levenshtein': {
      return matchLevenshtein(actual, expected, config);
    }

    case 'llm-rubric': {
      return matchLLMRubric(actual, expected, rubric, context);
    }

    case 'json-schema': {
      return matchJsonSchema(actual, config);
    }

    default: {
      return {
        passed: false,
        reason: `Unsupported rubric type: ${type}`,
        score: 0,
      };
    }
  }
};
22  packages/eval-rubric/src/matchers/jsonSchema.ts  Normal file
@@ -0,0 +1,22 @@
import type { RubricConfig } from '@lobechat/types';
import Ajv from 'ajv';

import type { MatchResult } from './types';

export const matchJsonSchema = (actual: string, config: RubricConfig): MatchResult => {
  const cfg = config as { schema: Record<string, unknown> };
  let parsed: unknown;
  try {
    parsed = JSON.parse(actual);
  } catch {
    return { passed: false, reason: 'Output is not valid JSON', score: 0 };
  }
  const ajv = new Ajv();
  const validate = ajv.compile(cfg.schema);
  const valid = validate(parsed);
  return {
    passed: valid,
    reason: valid ? undefined : ajv.errorsText(validate.errors),
    score: valid ? 1 : 0,
  };
};
42  packages/eval-rubric/src/matchers/levenshtein.ts  Normal file
@@ -0,0 +1,42 @@
import type { RubricConfig } from '@lobechat/types';

import { normalize } from '../normalize';
import type { MatchResult } from './types';

export const matchLevenshtein = (
  actual: string,
  expected: string | undefined,
  config: RubricConfig,
): MatchResult => {
  const cfg = config as { threshold?: number };
  const threshold = cfg.threshold ?? 0.8;
  const a = normalize(actual);
  const e = normalize(expected ?? '');
  const dist = levenshteinDistance(a, e);
  const maxLen = Math.max(a.length, e.length);
  const similarity = maxLen === 0 ? 1 : 1 - dist / maxLen;
  const passed = similarity >= threshold;
  return { passed, reason: `similarity=${similarity.toFixed(3)}`, score: similarity };
};

function levenshteinDistance(a: string, b: string): number {
  const m = a.length;
  const n = b.length;
  const dp: number[][] = Array.from({ length: m + 1 }, () =>
    Array.from({ length: n + 1 }, () => 0),
  );

  for (let i = 0; i <= m; i++) dp[i][0] = i;
  for (let j = 0; j <= n; j++) dp[0][j] = j;

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] =
        a[i - 1] === b[j - 1]
          ? dp[i - 1][j - 1]
          : 1 + Math.min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]);
    }
  }

  return dp[m][n];
}
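The matcher converts the raw edit distance into a similarity in [0, 1] via `1 - dist / maxLen`, with two empty strings counting as identical. A self-contained sketch of that conversion (the names `editDistance` and `similarity` are illustrative):

```typescript
// Classic dynamic-programming edit distance, as in levenshteinDistance above.
const editDistance = (a: string, b: string): number => {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] =
        a[i - 1] === b[j - 1]
          ? dp[i - 1][j - 1]
          : 1 + Math.min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]);
  return dp[a.length][b.length];
};

// Similarity in [0, 1]: 1 means identical, 0 means nothing in common.
const similarity = (a: string, b: string): number => {
  const maxLen = Math.max(a.length, b.length);
  return maxLen === 0 ? 1 : 1 - editDistance(a, b) / maxLen;
};
```

Normalizing by the longer string's length keeps the score comparable across answers of different sizes, which is what makes a single default threshold like 0.8 workable.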
82  packages/eval-rubric/src/matchers/llmRubric.ts  Normal file
@@ -0,0 +1,82 @@
import type { EvalBenchmarkRubric, RubricConfigLLM } from '@lobechat/types';

import type { MatchContext, MatchResult } from './types';

const DEFAULT_SYSTEM_ROLE = [
  'You are an expert evaluation judge. Your task is to score how well an AI output meets the given criteria.',
  '',
  'Scoring rules:',
  '- Score 1.0: The output fully satisfies the criteria.',
  '- Score 0.0: The output completely fails to meet the criteria.',
  '- Use intermediate values (e.g. 0.3, 0.5, 0.7) for partial matches.',
  '',
  'Respond with a JSON object containing "score" (number 0-1) and "reason" (brief explanation).',
].join('\n');

const JUDGE_SCORE_SCHEMA: Record<string, unknown> = {
  additionalProperties: false,
  properties: {
    reason: { description: 'Brief explanation for the score', type: 'string' },
    score: { description: 'Score from 0.0 to 1.0', maximum: 1, minimum: 0, type: 'number' },
  },
  required: ['score', 'reason'],
  type: 'object',
};

function buildJudgeUserPrompt(
  criteria: string,
  actual: string,
  expected: string | undefined,
): string {
  const parts = [`[Criteria]\n${criteria}`, `[Output]\n${actual}`];
  if (expected) {
    parts.push(`[Expected]\n${expected}`);
  }
  return parts.join('\n\n');
}

export const matchLLMRubric = async (
  actual: string,
  expected: string | undefined,
  rubric: EvalBenchmarkRubric,
  context?: MatchContext,
): Promise<MatchResult> => {
  if (!context?.generateObject) {
    return { passed: false, reason: 'LLM judge not available', score: 0 };
  }

  const cfg = rubric.config as RubricConfigLLM;
  const criteria = cfg.criteria || 'Evaluate whether the output is correct and helpful.';
  const model = cfg.model || context.judgeModel;

  if (!model) {
    return { passed: false, reason: 'No judge model configured', score: 0 };
  }

  try {
    const result = await context.generateObject({
      messages: [
        { content: cfg.systemRole || DEFAULT_SYSTEM_ROLE, role: 'system' },
        { content: buildJudgeUserPrompt(criteria, actual, expected), role: 'user' },
      ],
      model,
      provider: cfg.provider,
      schema: JUDGE_SCORE_SCHEMA,
    });

    const score = Math.max(0, Math.min(1, result.score));
    const threshold = rubric.threshold ?? 0.6;

    return {
      passed: score >= threshold,
      reason: result.reason,
      score,
    };
  } catch (error) {
    return {
      passed: false,
      reason: `LLM judge failed: ${error instanceof Error ? error.message : String(error)}`,
      score: 0,
    };
  }
};
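Note the defensive handling of the judge's numeric output above: the raw score is clamped into [0, 1] before the threshold comparison, so a judge that returns 1.5 (or a negative value) cannot distort the pass decision. A tiny sketch of that guard (the names `clampScore` and `judgeVerdict` are illustrative):

```typescript
// Clamp a raw judge score into [0, 1], then compare against the rubric threshold.
const clampScore = (raw: number): number => Math.max(0, Math.min(1, raw));

const judgeVerdict = (raw: number, threshold = 0.6) => {
  const score = clampScore(raw);
  return { passed: score >= threshold, score };
};
```

Clamping before thresholding means an out-of-range judge response degrades to a boundary score instead of propagating an invalid value into the weighted aggregation.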
19  packages/eval-rubric/src/matchers/numeric.ts  Normal file
@@ -0,0 +1,19 @@
import type { RubricConfig } from '@lobechat/types';

import type { MatchResult } from './types';

export const matchNumeric = (
  actual: string,
  expected: string | undefined,
  config: RubricConfig,
): MatchResult => {
  const cfg = config as { tolerance?: number; value: number };
  const actualNum = Number.parseFloat(actual.replaceAll(/[^.\-\d]/g, ''));
  if (Number.isNaN(actualNum)) {
    return { passed: false, reason: `Could not parse number from "${actual}"`, score: 0 };
  }
  const tolerance = cfg.tolerance ?? 0.01;
  const expectedNum = expected !== undefined ? Number.parseFloat(expected) : cfg.value;
  const passed = Math.abs(actualNum - expectedNum) <= tolerance;
  return { passed, score: passed ? 1 : 0 };
};
9  packages/eval-rubric/src/matchers/regex.ts  Normal file
@@ -0,0 +1,9 @@
import type { RubricConfig } from '@lobechat/types';

import type { MatchResult } from './types';

export const matchRegex = (actual: string, config: RubricConfig): MatchResult => {
  const cfg = config as { pattern: string };
  const passed = new RegExp(cfg.pattern, 'i').test(actual);
  return { passed, score: passed ? 1 : 0 };
};
9  packages/eval-rubric/src/matchers/startsWith.ts  Normal file
@@ -0,0 +1,9 @@
import { normalize } from '../normalize';
import type { MatchResult } from './types';

export const matchStartsWith = (actual: string, expected: string | undefined): MatchResult => {
  const a = normalize(actual);
  const e = normalize(expected ?? '');
  const passed = a.startsWith(e);
  return { passed, score: passed ? 1 : 0 };
};
17  packages/eval-rubric/src/matchers/types.ts  Normal file
@@ -0,0 +1,17 @@
export interface GenerateObjectPayload {
  messages: { content: string; role: 'system' | 'user' }[];
  model: string;
  provider?: string;
  schema: Record<string, unknown>;
}

export interface MatchContext {
  generateObject?: (payload: GenerateObjectPayload) => Promise<{ reason: string; score: number }>;
  judgeModel?: string;
}

export interface MatchResult {
  passed: boolean;
  reason?: string;
  score: number;
}
7  packages/eval-rubric/src/normalize.ts  Normal file
@@ -0,0 +1,7 @@
/**
 * Normalize text for comparison: trim whitespace, optionally lowercase
 */
export const normalize = (text: string, caseSensitive = false): string => {
  const trimmed = text.trim();
  return caseSensitive ? trimmed : trimmed.toLowerCase();
};
18  packages/eval-rubric/tsconfig.json  Normal file
@@ -0,0 +1,18 @@
{
  "compilerOptions": {
    "module": "CommonJS",
    "target": "ESNext",
    "lib": ["dom", "dom.iterable", "esnext"],
    "sourceMap": true,
    "skipDefaultLibCheck": true,
    "allowSyntheticDefaultImports": true,
    "moduleResolution": "node",
    "forceConsistentCasingInFileNames": true,
    "noImplicitReturns": true,
    "noUnusedLocals": true,
    "resolveJsonModule": true,
    "skipLibCheck": true,
    "strict": true,
    "types": ["vitest/globals"]
  }
}
|
|
```diff
@@ -672,4 +672,90 @@ describe('createCallbacksTransformer', () => {
 
     expect(onToolsCalling).toHaveBeenCalledTimes(2);
   });
+
+  // Regression: stream errors silently swallowed by createCallbacksTransformer
+  // These tests assert the CORRECT expected behavior. They will FAIL until the bug is fixed.
+  describe('error event handling', () => {
+    it('should call onError callback when stream contains an error event', async () => {
+      const onError = vi.fn();
+      const onText = vi.fn();
+      const onCompletion = vi.fn();
+      const transformer = createCallbacksTransformer({ onCompletion, onError, onText } as any);
+
+      const errorPayload = {
+        body: { message: 'rate limit exceeded' },
+        message: 'rate limit exceeded',
+        type: 'ProviderBizError',
+      };
+
+      const chunks = ['event: error\n', `data: ${JSON.stringify(errorPayload)}\n\n`];
+
+      await processChunks(transformer, chunks);
+
+      // onText should NOT be called
+      expect(onText).not.toHaveBeenCalled();
+
+      // onError SHOULD be called with the error data
+      expect(onError).toHaveBeenCalledOnce();
+      expect(onError).toHaveBeenCalledWith(errorPayload);
+    });
+
+    it('should include error in onCompletion data when stream has error after partial text', async () => {
+      const onCompletion = vi.fn();
+      const transformer = createCallbacksTransformer({ onCompletion } as any);
+
+      const errorPayload = {
+        body: { message: 'content filter triggered' },
+        message: 'content filter triggered',
+        type: 'ProviderBizError',
+      };
+
+      const chunks = [
+        'event: text\n',
+        'data: "Partial response"\n\n',
+        'event: error\n',
+        `data: ${JSON.stringify(errorPayload)}\n\n`,
+      ];
+
+      await processChunks(transformer, chunks);
+
+      // onCompletion should include the error so callers can detect the failure
+      expect(onCompletion).toHaveBeenCalledWith(
+        expect.objectContaining({
+          error: errorPayload,
+          text: 'Partial response',
+        }),
+      );
+    });
+
+    it('should surface first-chunk error via onError callback', async () => {
+      // Simulates the full chain: provider throws → ERROR_CHUNK_PREFIX → FIRST_CHUNK_ERROR_KEY
+      // → transformOpenAIStream returns { type: 'error' } → createSSEProtocolTransformer
+      // → createCallbacksTransformer should handle 'error' in switch
+      const onError = vi.fn();
+      const onCompletion = vi.fn();
+      const transformer = createCallbacksTransformer({ onCompletion, onError } as any);
+
+      const errorPayload = {
+        body: { message: 'insufficient balance', status_code: 1008 },
+        message: 'insufficient balance',
+        type: 'ProviderBizError',
+      };
+
+      const chunks = ['event: error\n', `data: ${JSON.stringify(errorPayload)}\n\n`];
+
+      await processChunks(transformer, chunks);
+
+      // onError should be called
+      expect(onError).toHaveBeenCalledOnce();
+      expect(onError).toHaveBeenCalledWith(errorPayload);
+
+      // onCompletion should include the error information
+      expect(onCompletion).toHaveBeenCalledWith(
+        expect.objectContaining({
+          error: errorPayload,
+        }),
+      );
+    });
+  });
 });
```
```diff
@@ -266,6 +266,7 @@ export function createCallbacksTransformer(cb: ChatStreamCallbacks | undefined)
   let speed: ModelPerformance | undefined;
   let grounding: any;
   let toolsCalling: any;
+  let streamError: any;
   // Track base64 images for accumulation
   const base64Images: Array<{ data: string; id: string }> = [];
 
@@ -275,6 +276,7 @@ export function createCallbacksTransformer(cb: ChatStreamCallbacks | undefined)
   return new TransformStream<string, Uint8Array>({
     async flush(): Promise<void> {
       const data = {
+        error: streamError,
        grounding,
        speed,
        text: aggregatedText,
@@ -385,6 +387,13 @@ export function createCallbacksTransformer(cb: ChatStreamCallbacks | undefined)
        toolsCalling = parseToolCalls(toolsCalling, data);
 
        await callbacks.onToolsCalling?.({ chunk: data, toolsCalling });
        break;
      }
+
+      case 'error': {
+        streamError = data;
+        await callbacks.onError?.(data);
+        break;
+      }
     }
   }
```
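The three hunks above form one flow: record the error payload, notify `onError` immediately, then carry the error into the data emitted at `flush()`. A simplified, stream-free sketch of that flow — the function and type names here are illustrative, not the real lobehub API:

```typescript
// Sketch of the fixed error flow, assuming pre-parsed SSE events instead of a
// real TransformStream. Names (runSketchTransformer, SketchCallbacks) are
// invented for this example.
type SketchCallbacks = {
  onCompletion?: (data: { error?: unknown; text: string }) => void;
  onError?: (error: unknown) => void;
};

const runSketchTransformer = (
  events: { data: unknown; type: string }[],
  callbacks: SketchCallbacks,
): void => {
  let aggregatedText = '';
  let streamError: unknown;

  for (const { data, type } of events) {
    switch (type) {
      case 'text': {
        aggregatedText += data as string;
        break;
      }
      // Before the fix there was no 'error' case, so error events were
      // silently dropped; with it the payload is recorded and surfaced.
      case 'error': {
        streamError = data;
        callbacks.onError?.(data);
        break;
      }
    }
  }

  // Mirrors flush(): the completion data carries the error alongside the text.
  callbacks.onCompletion?.({ error: streamError, text: aggregatedText });
};
```

With a partial `text` event followed by an `error` event, both `onError` and the completion payload observe the failure, which is exactly what the regression tests above assert.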
```diff
@@ -7,13 +7,13 @@ export type LLMRoleType = 'user' | 'system' | 'assistant' | 'function' | 'tool';
 export type ChatResponseFormat =
   | { type: 'json_object' }
   | {
-    json_schema: {
-      name: string;
-      schema: Record<string, any>;
-      strict?: boolean;
-    };
-    type: 'json_schema';
-  };
+      json_schema: {
+        name: string;
+        schema: Record<string, any>;
+        strict?: boolean;
+      };
+      type: 'json_schema';
+    };
 
 interface UserMessageContentPartThinking {
   signature: string;
```
```diff
@@ -216,6 +216,7 @@ export interface ChatCompletionTool {
 }
 
 export interface OnFinishData {
+  error?: any;
   grounding?: any;
   speed?: ModelPerformance;
   text: string;
```
```diff
@@ -265,6 +266,8 @@ export interface ChatStreamCallbacks {
    * Used for models that return structured content with mixed text and images.
    */
   onContentPart?: (data: ContentPartData) => Promise<void> | void;
+  /** `onError`: Called when a stream error event is received from the provider. */
+  onError?: (error: any) => Promise<void> | void;
   /**
    * `onFinal`: Called once when the stream is closed with the final completion message.
    **/
```
```diff
@@ -7,7 +7,7 @@ import type { OpenAIChatMessage } from './openai/chat';
 import type { LobeUniformTool } from './tool';
 import { LobeUniformToolSchema } from './tool';
 import type { ChatTopic } from './topic';
-import type { IThreadType } from './topic/thread';
+import type { ChatThreadType } from './topic/thread';
 import { ThreadType } from './topic/thread';
 
 export interface SendNewMessage {
@@ -30,7 +30,7 @@ export interface CreateThreadWithMessageParams {
   /** Optional thread title */
   title?: string;
   /** Thread type */
-  type: IThreadType;
+  type: ChatThreadType;
 }
 
 export interface SendMessageServerParams {
```
```diff
@@ -2,12 +2,18 @@ import { z } from 'zod';
 
 export const ThreadType = {
   Continuation: 'continuation',
+  Eval: 'eval',
   Isolation: 'isolation',
   Standalone: 'standalone',
 } as const;
 
 export type IThreadType = (typeof ThreadType)[keyof typeof ThreadType];
 
+/**
+ * Thread types available for chat (excludes eval-only types)
+ */
+export type ChatThreadType = Exclude<IThreadType, 'eval'>;
+
 export enum ThreadStatus {
   Active = 'active',
   Cancel = 'cancel',
@@ -103,5 +109,10 @@ export const createThreadSchema = z.object({
   sourceMessageId: z.string().optional(),
   title: z.string().optional(),
   topicId: z.string(),
-  type: z.enum([ThreadType.Continuation, ThreadType.Standalone, ThreadType.Isolation]),
+  type: z.enum([
+    ThreadType.Continuation,
+    ThreadType.Eval,
+    ThreadType.Standalone,
+    ThreadType.Isolation,
+  ]),
 });
```
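The `Exclude` in the hunk above lets the schema accept the new `eval` thread type while keeping it out of chat-facing code at compile time. A self-contained sketch of that narrowing — the `openChatThread` helper is invented for illustration; the type names mirror the diff:

```typescript
// Names mirror the thread.ts diff above; the openChatThread helper is
// illustrative only.
const ThreadType = {
  Continuation: 'continuation',
  Eval: 'eval',
  Isolation: 'isolation',
  Standalone: 'standalone',
} as const;

type IThreadType = (typeof ThreadType)[keyof typeof ThreadType];

// Chat-facing code accepts every thread type except the eval-only one:
type ChatThreadType = Exclude<IThreadType, 'eval'>;

const openChatThread = (type: ChatThreadType): string => `opened ${type} thread`;

console.log(openChatThread(ThreadType.Standalone)); // ok
// openChatThread(ThreadType.Eval); // compile-time error: 'eval' is excluded
```

This is why only `SendNewMessage`/`CreateThreadWithMessageParams` switch to `ChatThreadType`, while the zod `createThreadSchema` (a server-side boundary that benchmark runs also pass through) enumerates all four values.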
```diff
@@ -106,6 +106,13 @@ export const formatTokenNumber = (num: number): string => {
   return kiloToken < 1000 ? `${kiloToken}K` : `${Math.floor(kiloToken / 1000)}M`;
 };
 
+export const formatCost = (value: number): string => {
+  return value.toLocaleString('en-US', {
+    maximumSignificantDigits: 4,
+    minimumSignificantDigits: 2,
+  });
+};
+
 export const formatPrice = (price: number, fractionDigits: number = 2) => {
   if (!price && price !== 0) return '--';
 
```
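`formatCost` formats by significant digits rather than fixed decimals, which suits per-run costs that range from fractions of a cent to thousands of tokens' worth. A self-contained reproduction showing the behavior:

```typescript
// Reproduction of formatCost from the diff above, made self-contained.
const formatCost = (value: number): string =>
  value.toLocaleString('en-US', {
    maximumSignificantDigits: 4,
    minimumSignificantDigits: 2,
  });

console.log(formatCost(5));         // "5.0"   (padded up to 2 significant digits)
console.log(formatCost(0.5));       // "0.50"
console.log(formatCost(1234.5678)); // "1,235" (capped at 4 significant digits)
```

Significant-digit bounds keep tiny costs readable without drowning large ones in decimal places, at the price of locale-dependent grouping separators in the output.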
68 packages/utils/src/sanitizeNullBytes.test.ts Normal file

```ts
import { describe, expect, it } from 'vitest';

import { sanitizeNullBytes } from './sanitizeNullBytes';

describe('sanitizeNullBytes', () => {
  it('should return null/undefined as-is', () => {
    expect(sanitizeNullBytes(null)).toBeNull();
    expect(sanitizeNullBytes(undefined)).toBeUndefined();
  });

  it('should return non-string primitives as-is', () => {
    expect(sanitizeNullBytes(42)).toBe(42);
    expect(sanitizeNullBytes(true)).toBe(true);
  });

  // --- string ---

  it('should remove null bytes from strings', () => {
    expect(sanitizeNullBytes('hello\u0000world')).toBe('helloworld');
  });

  it('should handle multiple null bytes in strings', () => {
    expect(sanitizeNullBytes('\u0000a\u0000b\u0000')).toBe('ab');
  });

  it('should preserve valid strings', () => {
    expect(sanitizeNullBytes('montée')).toBe('montée');
  });

  // --- object / jsonb ---

  it('should recover corrupted Unicode \\u0000XX → \\u00XX in objects', () => {
    // Simulate the real bug: "montée" encoded as "mont\u0000e9e" in JSON
    // \u0000 is null byte, followed by "e9" which should have been \u00e9 (é)
    const corrupted = JSON.parse('{"query":"mont\\u0000e9e"}');
    const result = sanitizeNullBytes(corrupted);
    expect(result.query).toBe('montée');
  });

  it('should strip remaining null bytes in objects after recovery', () => {
    const obj = { text: 'a\u0000b', nested: { val: 'x\u0000y' } };
    const result = sanitizeNullBytes(obj);
    expect(result.text).toBe('ab');
    expect(result.nested.val).toBe('xy');
  });

  it('should handle real-world web search state with corrupted Unicode', () => {
    const state = {
      query: 'Auxerre mont\u0000e Ligue 1',
      results: [{ content: 'Some result with null\u0000byte', url: 'https://example.com' }],
    };
    const result = sanitizeNullBytes(state);
    expect(result.query).toBe('Auxerre monte Ligue 1');
    expect(result.results[0].content).toBe('Some result with nullbyte');
    expect(JSON.stringify(result)).not.toContain('\u0000');
  });

  it('should handle objects without null bytes (no-op)', () => {
    const obj = { a: 1, b: 'hello', c: [1, 2, 3] };
    expect(sanitizeNullBytes(obj)).toEqual(obj);
  });

  it('should handle arrays', () => {
    const arr = ['a\u0000b', 'c\u0000d'];
    const result = sanitizeNullBytes(arr);
    expect(result).toEqual(['ab', 'cd']);
  });
});
```
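The diff only shows the tests, not the implementation. A hypothetical sketch of a `sanitizeNullBytes` that satisfies every assertion above — the real function in `packages/utils` may differ in details:

```typescript
// Hypothetical implementation matching the behavior the tests above assert;
// the real packages/utils version may differ.
const sanitizeString = (text: string): string =>
  text
    // A null byte followed by two hex digits is treated as a corrupted
    // \u00XX escape, e.g. "mont\u0000e9e" becomes "montée".
    .replace(/\u0000([0-9a-fA-F]{2})/g, (_match, hex) =>
      String.fromCharCode(parseInt(hex, 16)),
    )
    // Any remaining null bytes are simply stripped.
    .replace(/\u0000/g, '');

const sanitizeNullBytes = (value: any): any => {
  if (value === null || value === undefined) return value;
  if (typeof value === 'string') return sanitizeString(value);
  if (Array.isArray(value)) return value.map((item) => sanitizeNullBytes(item));
  if (typeof value === 'object') {
    // Recurse into every property so nested jsonb payloads are cleaned too.
    return Object.fromEntries(
      Object.entries(value).map(([key, val]) => [key, sanitizeNullBytes(val)]),
    );
  }
  return value;
};
```

Stripping is needed because Postgres `text`/`jsonb` columns reject `\u0000`; the recovery step additionally rescues characters that were mangled into a null byte plus their hex digits.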