Mirror of https://github.com/lobehub/lobehub, synced 2026-04-21 17:47:27 +00:00
🐛 fix: add SSRF protection (#10152)
Parent: 8219124a10
Commit: 4f7bc5acd2
77 changed files with 367 additions and 321 deletions
.env.example (11 changes)
@@ -13,6 +13,17 @@
 # Default is '0' (enabled)
 # ENABLED_CSP=1
+
+# SSRF Protection Settings
+# Set to '1' to allow connections to private IP addresses (disable SSRF protection)
+# WARNING: Only enable this in trusted environments
+# Default is '0' (SSRF protection enabled)
+# SSRF_ALLOW_PRIVATE_IP_ADDRESS=0
+
+# Whitelist of allowed private IP addresses (comma-separated)
+# Only takes effect when SSRF_ALLOW_PRIVATE_IP_ADDRESS is '0'
+# Example: Allow specific internal servers while keeping SSRF protection
+# SSRF_ALLOW_IP_ADDRESS_LIST=192.168.1.100,10.0.0.50
 
 ########################################
 ########## AI Provider Service #########
 ########################################
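The two variables above can be sketched as a small parser. This is a hypothetical helper, not LobeChat's actual code; only the variable names and their documented semantics come from `.env.example`:

```typescript
// Hypothetical helper: parse the two SSRF-related environment variables.
// Variable names come from .env.example above; the parsing logic is an assumption.
interface SSRFConfig {
  allowPrivateIpAddress: boolean;
  allowIpAddressList: string[];
}

const parseSSRFConfig = (env: Record<string, string | undefined>): SSRFConfig => ({
  // '1' disables SSRF protection; anything else (including unset) keeps it on
  allowPrivateIpAddress: env.SSRF_ALLOW_PRIVATE_IP_ADDRESS === '1',
  // comma-separated whitelist; whitespace and empty entries stripped
  allowIpAddressList: (env.SSRF_ALLOW_IP_ADDRESS_LIST ?? '')
    .split(',')
    .map((ip) => ip.trim())
    .filter(Boolean),
});

const config = parseSSRFConfig({
  SSRF_ALLOW_PRIVATE_IP_ADDRESS: '0',
  SSRF_ALLOW_IP_ADDRESS_LIST: '192.168.1.100, 10.0.0.50',
});
```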
@@ -127,16 +127,62 @@ For specific content, please refer to the [Feature Flags](/docs/self-hosting/adv
 ### `SSRF_ALLOW_PRIVATE_IP_ADDRESS`
 
 - Type: Optional
-- Description: Allow to connect private IP address. In a trusted environment, it can be set to true to turn off SSRF protection.
+- Description: Controls whether to allow connections to private IP addresses. Set to `1` to disable SSRF protection and allow all private IP addresses. In a trusted environment (e.g., internal network), this can be enabled to allow access to internal resources.
 - Default: `0`
 - Example: `1` or `0`
+
+<Callout type="warning">
+  **Security Notice**: Enabling this option will disable SSRF protection and allow connections to private
+  IP addresses (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, etc.). Only enable this in
+  trusted environments where you need to access internal network resources.
+</Callout>
+
+**Use Cases**:
+
+LobeChat performs SSRF security checks in the following scenarios:
+
+1. **Image/Video URL to Base64 Conversion**: When processing media messages (e.g., vision models, multimodal models), LobeChat converts image and video URLs to base64 format. This check prevents malicious users from accessing internal network resources.
+
+   Examples:
+
+   - Image: A user sends an image message with URL `http://192.168.1.100/admin/secrets.png`
+   - Video: A user sends a video message with URL `http://10.0.0.50/internal/meeting.mp4`
+
+   Without SSRF protection, these requests could expose internal network resources.
+
+2. **Web Crawler**: When using web crawling features to fetch external content.
+
+3. **Proxy Requests**: When proxying external API requests.
+
+**Configuration Examples**:
+
+```bash
+# Scenario 1: Public deployment (recommended)
+# Block all private IP addresses for security
+SSRF_ALLOW_PRIVATE_IP_ADDRESS=0
+
+# Scenario 2: Internal deployment
+# Allow all private IP addresses to access internal image servers
+SSRF_ALLOW_PRIVATE_IP_ADDRESS=1
+
+# Scenario 3: Hybrid deployment (most common)
+# Block private IPs by default, but allow specific trusted internal servers
+SSRF_ALLOW_PRIVATE_IP_ADDRESS=0
+SSRF_ALLOW_IP_ADDRESS_LIST=192.168.1.100,10.0.0.50
+```
 
 ### `SSRF_ALLOW_IP_ADDRESS_LIST`
 
 - Type: Optional
-- Description: Allow private IP address list, multiple IP addresses are separated by commas. Only when `SSRF_ALLOW_PRIVATE_IP_ADDRESS` is `0`, it takes effect.
+- Description: Whitelist of allowed IP addresses, separated by commas. Only takes effect when `SSRF_ALLOW_PRIVATE_IP_ADDRESS` is `0`. Use this to allow specific internal IP addresses while keeping SSRF protection enabled for other private IPs.
 - Default: -
-- Example: `198.18.1.62,224.0.0.3`
+- Example: `192.168.1.100,10.0.0.50,172.16.0.10`
+
+**Common Use Cases**:
+
+- Allow access to internal image storage server: `192.168.1.100`
+- Allow access to internal API gateway: `10.0.0.50`
+- Allow access to internal documentation server: `172.16.0.10`
 
 ### `ENABLE_AUTH_PROTECTION`
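The documented behavior (block private ranges unless protection is disabled or the address is whitelisted) can be illustrated with a small IPv4 check. This is an illustrative sketch of the kind of check SSRF protection performs, not LobeChat's actual implementation; the range list comes from the callout above:

```typescript
// Illustrative SSRF-style check; NOT LobeChat's actual implementation. IPv4 only.
const PRIVATE_V4_RANGES: [number, number][] = [
  [0x7f000000, 0xff000000], // 127.0.0.0/8 (loopback)
  [0x0a000000, 0xff000000], // 10.0.0.0/8
  [0xac100000, 0xfff00000], // 172.16.0.0/12
  [0xc0a80000, 0xffff0000], // 192.168.0.0/16
  [0xa9fe0000, 0xffff0000], // 169.254.0.0/16 (link-local)
];

const ipv4ToInt = (ip: string): number => {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255))
    throw new Error(`invalid IPv4 address: ${ip}`);
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
};

const isPrivateIp = (ip: string): boolean => {
  const n = ipv4ToInt(ip);
  return PRIVATE_V4_RANGES.some(([base, mask]) => ((n & mask) >>> 0) === base);
};

// Mirrors the documented semantics: a private IP is blocked unless protection is
// disabled outright (SSRF_ALLOW_PRIVATE_IP_ADDRESS=1) or the exact address is
// on the SSRF_ALLOW_IP_ADDRESS_LIST whitelist.
const isConnectionAllowed = (ip: string, allowPrivate: boolean, allowList: string[]): boolean =>
  !isPrivateIp(ip) || allowPrivate || allowList.includes(ip);
```

A real implementation would also need to resolve hostnames before checking (to defeat DNS rebinding) and handle IPv6.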
@@ -123,16 +123,61 @@ LobeChat 在部署时提供了一些额外的配置项,你可以使用环境
 ### `SSRF_ALLOW_PRIVATE_IP_ADDRESS`
 
 - 类型:可选
-- 描述:是否允许连接私有 IP 地址。在可信环境中可以设置为 true 来关闭 SSRF 防护。
+- 描述:控制是否允许连接私有 IP 地址。设置为 `1` 时将关闭 SSRF 防护并允许所有私有 IP 地址。在可信环境(如内网部署)中,可以启用此选项以访问内部资源。
 - 默认值:`0`
-- 示例:`1` or `0`
+- 示例:`1` 或 `0`
+
+<Callout type="warning">
+  **安全提示**:启用此选项将关闭 SSRF 防护,允许连接私有 IP 地址段(127.0.0.0/8、10.0.0.0/8、172.16.0.0/12、192.168.0.0/16
+  等)。仅在需要访问内网资源的可信环境中启用。
+</Callout>
+
+**应用场景**:
+
+LobeChat 会在以下场景执行 SSRF 安全检查:
+
+1. **图片 / 视频 URL 转 Base64**:在处理媒体消息时(例如视觉模型、多模态模型),LobeChat 会将图片和视频 URL 转换为 base64 格式。此检查可防止恶意用户通过媒体 URL 访问内网资源。
+
+   举例:
+
+   - 图片:用户发送图片消息,URL 为 `http://192.168.1.100/admin/secrets.png`
+   - 视频:用户发送视频消息,URL 为 `http://10.0.0.50/internal/meeting.mp4`
+
+   若无 SSRF 防护,这些请求可能导致内网资源泄露。
+
+2. **网页爬取**:使用网页爬取功能获取外部内容时。
+
+3. **代理请求**:代理外部 API 请求时。
+
+**配置示例**:
+
+```bash
+# 场景 1:公网部署(推荐)
+# 阻止所有私有 IP 访问,保证安全
+SSRF_ALLOW_PRIVATE_IP_ADDRESS=0
+
+# 场景 2:内网部署
+# 允许所有私有 IP,可访问内网图片服务器等资源
+SSRF_ALLOW_PRIVATE_IP_ADDRESS=1
+
+# 场景 3:混合部署(最常见)
+# 默认阻止私有 IP,但允许特定可信的内网服务器
+SSRF_ALLOW_PRIVATE_IP_ADDRESS=0
+SSRF_ALLOW_IP_ADDRESS_LIST=192.168.1.100,10.0.0.50
+```
 
 ### `SSRF_ALLOW_IP_ADDRESS_LIST`
 
 - 类型:可选
-- 说明:允许的私有 IP 地址列表,多个 IP 地址用逗号分隔。仅在 `SSRF_ALLOW_PRIVATE_IP_ADDRESS` 为 `0` 时生效。
+- 描述:允许访问的 IP 地址白名单,多个 IP 地址用逗号分隔。仅在 `SSRF_ALLOW_PRIVATE_IP_ADDRESS` 为 `0` 时生效。使用此选项可以在保持 SSRF 防护的同时,允许访问特定的内网 IP 地址。
 - 默认值:-
-- 示例:`198.18.1.62,224.0.0.3`
+- 示例:`192.168.1.100,10.0.0.50,172.16.0.10`
+
+**常见使用场景**:
+
+- 允许访问内网图片存储服务器:`192.168.1.100`
+- 允许访问内网 API 网关:`10.0.0.50`
+- 允许访问内网文档服务器:`172.16.0.10`
 
 ### `ENABLE_AUTH_PROTECTION`
@@ -154,6 +154,7 @@
     "@lobechat/database": "workspace:*",
     "@lobechat/electron-client-ipc": "workspace:*",
     "@lobechat/electron-server-ipc": "workspace:*",
+    "@lobechat/fetch-sse": "workspace:*",
     "@lobechat/file-loaders": "workspace:*",
     "@lobechat/model-runtime": "workspace:*",
     "@lobechat/observability-otel": "workspace:*",
packages/fetch-sse/package.json (new file, 29 lines)
@@ -0,0 +1,29 @@
+{
+  "name": "@lobechat/fetch-sse",
+  "version": "1.0.0",
+  "private": true,
+  "description": "SSE fetch utilities with streaming support",
+  "exports": {
+    ".": {
+      "types": "./src/index.ts",
+      "default": "./src/index.ts"
+    },
+    "./parseError": {
+      "types": "./src/parseError.ts",
+      "default": "./src/parseError.ts"
+    }
+  },
+  "main": "./src/index.ts",
+  "types": "./src/index.ts",
+  "scripts": {
+    "test": "vitest",
+    "test:coverage": "vitest --coverage --silent='passed-only'"
+  },
+  "dependencies": {
+    "@lobechat/const": "workspace:*",
+    "@lobechat/model-runtime": "workspace:*",
+    "@lobechat/types": "workspace:*",
+    "@lobechat/utils": "workspace:*",
+    "i18next": "^24.2.1"
+  }
+}
@@ -1,10 +1,10 @@
 import { MESSAGE_CANCEL_FLAT } from '@lobechat/const';
 import { ChatMessageError } from '@lobechat/types';
+import { FetchEventSourceInit } from '@lobechat/utils/client/fetchEventSource/index';
+import { fetchEventSource } from '@lobechat/utils/client/fetchEventSource/index';
+import { sleep } from '@lobechat/utils/sleep';
 import { afterEach, describe, expect, it, vi } from 'vitest';
 
-import { FetchEventSourceInit } from '../../client/fetchEventSource';
-import { fetchEventSource } from '../../client/fetchEventSource';
-import { sleep } from '../../sleep';
 import { fetchSSE } from '../fetchSSE';
 
 // 模拟 i18next
@@ -12,7 +12,7 @@ vi.mock('i18next', () => ({
   t: vi.fn((key) => `translated_${key}`),
 }));
 
-vi.mock('../../client/fetchEventSource', () => ({
+vi.mock('@lobechat/utils/client/fetchEventSource/index', () => ({
   fetchEventSource: vi.fn(),
 }));
@@ -1,14 +1,14 @@
 import { ErrorResponse } from '@lobechat/types';
-import { afterEach, describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { getMessageError } from '../parseError';
 
-// 模拟 i18next
+// Mock i18next
 vi.mock('i18next', () => ({
   t: vi.fn((key) => `translated_${key}`),
 }));
 
-// 模拟 Response
+// Mock Response
 const createMockResponse = (body: any, ok: boolean, status: number = 200) => ({
   ok,
   status,
@@ -38,11 +38,14 @@ const createMockResponse = (body: any, ok: boolean, status: number = 200) => ({
   },
 });
 
-// 在每次测试后清理所有模拟
 afterEach(() => {
   vi.restoreAllMocks();
 });
 
+beforeEach(() => {
+  vi.clearAllMocks();
+});
+
 describe('getMessageError', () => {
   it('should handle business error correctly', async () => {
     const mockErrorResponse: ErrorResponse = {
@@ -12,9 +12,9 @@ import {
   ResponseAnimation,
   ResponseAnimationStyle,
 } from '@lobechat/types';
+import { fetchEventSource } from '@lobechat/utils/client/fetchEventSource/index';
+import { nanoid } from '@lobechat/utils/uuid';
 
-import { fetchEventSource } from '../client/fetchEventSource';
-import { nanoid } from '../uuid';
 import { getMessageError } from './parseError';
 
 type SSEFinishType = 'done' | 'error' | 'abort';
@@ -1,7 +1,7 @@
 import { ChatMessageError, ErrorResponse, ErrorType } from '@lobechat/types';
 import { t } from 'i18next';
 
-export const getMessageError = async (response: Response) => {
+export const getMessageError = async (response: Response): Promise<ChatMessageError> => {
   let chatMessageError: ChatMessageError;
 
   // try to get the biz error
@@ -9,13 +9,13 @@ export const getMessageError = async (response: Response) => {
     const data = (await response.json()) as ErrorResponse;
     chatMessageError = {
       body: data.body,
-      message: t(`response.${data.errorType}` as any, { ns: 'error' }),
+      message: t(`response.${data.errorType}`, { ns: 'error' }),
       type: data.errorType,
     };
   } catch {
     // if not return, then it's a common error
     chatMessageError = {
-      message: t(`response.${response.status}` as any, { ns: 'error' }),
+      message: t(`response.${response.status}`, { ns: 'error' }),
       type: response.status as ErrorType,
     };
   }
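The two-step fallback in `getMessageError` above (prefer the structured business error in the JSON body, otherwise degrade to the HTTP status code) can be sketched without the i18next dependency. This is a simplified illustration; the translation call is replaced by a plain string and the names are not LobeChat's:

```typescript
// Simplified sketch of getMessageError's fallback logic; i18next's t() is
// replaced by a plain string, and the shapes here are illustrative.
interface SimpleChatError {
  body?: unknown;
  message: string;
  type: string | number;
}

const parseErrorLike = async (response: {
  status: number;
  json: () => Promise<any>;
}): Promise<SimpleChatError> => {
  try {
    // try to get the biz error first
    const data = await response.json();
    if (!data.errorType) throw new Error('not a business error');
    return { body: data.body, message: `response.${data.errorType}`, type: data.errorType };
  } catch {
    // body missing or unparseable: fall back to the HTTP status code
    return { message: `response.${response.status}`, type: response.status };
  }
};
```

Annotating the return type (as the commit does with `Promise<ChatMessageError>`) lets callers rely on the shape without inference through the try/catch branches.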
@@ -1,8 +1,8 @@
+import { imageUrlToBase64 } from '@lobechat/utils';
 import { OpenAI } from 'openai';
-import { describe, expect, it, vi } from 'vitest';
+import { beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { OpenAIChatMessage, UserMessageContentPart } from '../../types/chat';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 import {
   buildAnthropicBlock,
@@ -12,16 +12,22 @@
 } from './anthropic';
 
-// Mock the parseDataUri function since it's an implementation detail
-vi.mock('../../utils/uriParser', () => ({
-  parseDataUri: vi.fn().mockReturnValue({
-    mimeType: 'image/jpeg',
-    base64: 'base64EncodedString',
-    type: 'base64',
-  }),
-}));
+vi.mock('../../utils/uriParser');
+vi.mock('@lobechat/utils', () => ({
+  imageUrlToBase64: vi.fn(),
+}));
-vi.mock('../../utils/imageToBase64');
 
 describe('anthropicHelpers', () => {
+  beforeEach(() => {
+    vi.resetAllMocks();
+    // Set default mock implementation for parseDataUri
+    vi.mocked(parseDataUri).mockReturnValue({
+      mimeType: 'image/jpeg',
+      base64: 'base64EncodedString',
+      type: 'base64',
+    });
+  });
+
   describe('buildAnthropicBlock', () => {
     it('should return the content as is for text type', async () => {
       const content: UserMessageContentPart = { type: 'text', text: 'Hello!' };
@@ -52,7 +58,7 @@ describe('anthropicHelpers', () => {
       base64: null,
       type: 'url',
     });
-    vi.mocked(imageUrlToBase64).mockResolvedValue({
+    vi.mocked(imageUrlToBase64).mockResolvedValueOnce({
       base64: 'convertedBase64String',
       mimeType: 'image/jpg',
     });
@@ -82,7 +88,7 @@ describe('anthropicHelpers', () => {
       base64: null,
       type: 'url',
     });
-    vi.mocked(imageUrlToBase64).mockResolvedValue({
+    vi.mocked(imageUrlToBase64).mockResolvedValueOnce({
       base64: 'convertedBase64String',
       mimeType: 'image/png',
     });
@@ -1,8 +1,8 @@
 import Anthropic from '@anthropic-ai/sdk';
+import { imageUrlToBase64 } from '@lobechat/utils';
 import OpenAI from 'openai';
 
 import { OpenAIChatMessage, UserMessageContentPart } from '../../types';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 
 export const buildAnthropicBlock = async (
@@ -1,9 +1,9 @@
 // @vitest-environment node
 import { Type as SchemaType } from '@google/genai';
+import * as imageToBase64Module from '@lobechat/utils';
 import { describe, expect, it, vi } from 'vitest';
 
 import { ChatCompletionTool, OpenAIChatMessage, UserMessageContentPart } from '../../types';
-import * as imageToBase64Module from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 import {
   buildGoogleMessage,
@@ -5,9 +5,9 @@ import {
   Part,
   Type as SchemaType,
 } from '@google/genai';
+import { imageUrlToBase64 } from '@lobechat/utils';
 
 import { ChatCompletionTool, OpenAIChatMessage, UserMessageContentPart } from '../../types';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { safeParseJSON } from '../../utils/safeParseJSON';
 import { parseDataUri } from '../../utils/uriParser';
 
@@ -64,12 +64,9 @@ export const buildGooglePart = async (
   }
 
   if (type === 'url') {
     // For video URLs, we need to fetch and convert to base64
+    // Use imageUrlToBase64 for SSRF protection (works for any binary data including videos)
-    // Note: This might need size/duration limits for practical use
-    const response = await fetch(content.video_url.url);
-    const arrayBuffer = await response.arrayBuffer();
-    const base64 = Buffer.from(arrayBuffer).toString('base64');
-    const mimeType = response.headers.get('content-type') || 'video/mp4';
+    const { base64, mimeType } = await imageUrlToBase64(content.video_url.url);
 
     return {
       inlineData: { data: base64, mimeType },
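The hunk above routes all URL fetching through `imageUrlToBase64` so the SSRF check lives in one place, and the builder then wraps the result as `{ inlineData: { data, mimeType } }`. The data-URI branch (handled by `parseDataUri` earlier in the function) can be sketched like this; the helper below is hypothetical, not LobeChat's actual `parseDataUri`:

```typescript
// Hypothetical sketch of the role parseDataUri plays in buildGooglePart: split a
// data URI into mime type and base64 payload so it can be passed as inlineData.
// Not the actual LobeChat implementation.
type ParsedUri =
  | { base64: string; mimeType: string; type: 'base64' }
  | { base64: null; mimeType: null; type: 'url' };

const parseDataUriSketch = (uri: string): ParsedUri => {
  const match = uri.match(/^data:([^;,]+);base64,(.+)$/);
  if (!match) return { base64: null, mimeType: null, type: 'url' };
  return { base64: match[2], mimeType: match[1], type: 'base64' };
};

const toInlineDataPart = (uri: string) => {
  const parsed = parseDataUriSketch(uri);
  // remote URLs must go through the centralized (SSRF-checked) fetch path instead
  if (parsed.type !== 'base64') throw new Error('remote URLs must be fetched with SSRF checks first');
  return { inlineData: { data: parsed.base64, mimeType: parsed.mimeType } };
};
```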
@@ -1,8 +1,8 @@
+import { imageUrlToBase64 } from '@lobechat/utils';
 import OpenAI from 'openai';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { OpenAIChatMessage } from '../../types';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 import {
   convertImageUrlToFile,
@@ -12,7 +12,9 @@
 } from './openai';
 
 // 模拟依赖
-vi.mock('../../utils/imageToBase64');
+vi.mock('@lobechat/utils', () => ({
+  imageUrlToBase64: vi.fn(),
+}));
 vi.mock('../../utils/uriParser');
 
 describe('convertMessageContent', () => {
@@ -1,8 +1,8 @@
+import { imageUrlToBase64 } from '@lobechat/utils';
 import OpenAI, { toFile } from 'openai';
 
 import { disableStreamModels, systemToUserModels } from '../../const/models';
 import { ChatStreamPayload, OpenAIChatMessage } from '../../types';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 
 export const convertMessageContent = async (
@@ -1,9 +1,9 @@
 // @vitest-environment node
+import * as imageToBase64Module from '@lobechat/utils';
 import OpenAI from 'openai';
 import { beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { CreateImagePayload } from '../../types/image';
-import * as imageToBase64Module from '../../utils/imageToBase64';
 import * as uriParserModule from '../../utils/uriParser';
 import { createOpenAICompatibleImage } from './createImage';
@@ -1,3 +1,4 @@
+import { imageUrlToBase64 } from '@lobechat/utils';
 import { cleanObject } from '@lobechat/utils/object';
 import createDebug from 'debug';
 import { RuntimeImageGenParamsValue } from 'model-bank';
@@ -5,7 +6,6 @@ import OpenAI from 'openai';
 
 import { CreateImagePayload, CreateImageResponse } from '../../types/image';
 import { getModelPricing } from '../../utils/getModelPricing';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 import { convertImageUrlToFile } from '../contextBuilders/openai';
 import { convertOpenAIImageUsage } from '../usageConverters/openai';
@@ -1,15 +1,12 @@
 // @vitest-environment node
+import {
+  AgentRuntimeErrorType,
+  ChatStreamCallbacks,
+  ChatStreamPayload,
+  LobeOpenAICompatibleRuntime,
+} from '@lobechat/model-runtime';
 import { ModelProvider } from 'model-bank';
 import OpenAI from 'openai';
 import type { Stream } from 'openai/streaming';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
-import { ChatStreamCallbacks, ChatStreamPayload } from '../../types/chat';
-import { AgentRuntimeErrorType } from '../../types/error';
 import * as debugStreamModule from '../../utils/debugStream';
 import * as openaiHelpers from '../contextBuilders/openai';
 import { createOpenAICompatibleRuntime } from './index';
@@ -1,6 +1,6 @@
+import { AgentRuntimeErrorType } from '@lobechat/model-runtime';
 import { describe, expect, it, vi } from 'vitest';
 
-import { AgentRuntimeErrorType } from '../../../types/error';
 import { FIRST_CHUNK_ERROR_KEY } from '../protocol';
 import { createReadableStream, readStreamChunk } from '../utils';
 import { OpenAIResponsesStream } from './responsesStream';
@@ -1,6 +1,7 @@
+import { ChatMethodOptions } from '@lobechat/model-runtime';
 import debug from 'debug';
 
-import { ChatMethodOptions } from '../types/chat';
 
 const log = debug('model-runtime:helpers:mergeChatMethodOptions');
 
 export const mergeMultipleChatMethodOptions = (options: ChatMethodOptions[]): ChatMethodOptions => {
@@ -1,4 +1,4 @@
-// @vitest-environment edge-runtime
+// @vitest-environment node
 import { ModelProvider } from 'model-bank';
 import { beforeEach, describe, expect, it, vi } from 'vitest';
 
@@ -1,4 +1,4 @@
-// @vitest-environment edge-runtime
+// @vitest-environment node
 import { describe, expect, it, vi } from 'vitest';
 
 import { createAnthropicGenerateObject } from './generateObject';
@@ -1,8 +1,8 @@
 // @vitest-environment node
+import { ChatCompletionTool, ChatStreamPayload } from '@lobechat/model-runtime';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
 import * as anthropicHelpers from '../../core/contextBuilders/anthropic';
-import { ChatCompletionTool, ChatStreamPayload } from '../../types/chat';
 import * as debugStreamModule from '../../utils/debugStream';
 import { LobeAnthropicAI } from './index';
 
@@ -1,8 +1,8 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { ModelProvider } from 'model-bank';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import { LobeBaichuanAI, params } from './index';
 
@@ -3,10 +3,10 @@ import {
   InvokeModelCommand,
   InvokeModelWithResponseStreamCommand,
 } from '@aws-sdk/client-bedrock-runtime';
+import { AgentRuntimeErrorType } from '@lobechat/model-runtime';
 import { ModelProvider } from 'model-bank';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { AgentRuntimeErrorType } from '../../types/error';
 import * as debugStreamModule from '../../utils/debugStream';
 import { LobeBedrockAI, experimental_buildLlama2Prompt } from './index';
 
@@ -6,7 +6,7 @@ import { createBflImage } from './createImage';
 import { BflStatusResponse } from './types';
 
 // Mock external dependencies
-vi.mock('../../utils/imageToBase64', () => ({
+vi.mock('@lobechat/utils', () => ({
   imageUrlToBase64: vi.fn(),
 }));
 
@@ -187,7 +187,7 @@ describe('createBflImage', () => {
   it('should convert single imageUrl to image_prompt base64', async () => {
     // Arrange
     const { parseDataUri } = await import('../../utils/uriParser');
-    const { imageUrlToBase64 } = await import('../../utils/imageToBase64');
+    const { imageUrlToBase64 } = await import('@lobechat/utils');
     const { asyncifyPolling } = await import('../../utils/asyncifyPolling');
 
     const mockParseDataUri = vi.mocked(parseDataUri);
@@ -290,7 +290,7 @@ describe('createBflImage', () => {
   it('should convert multiple imageUrls for Kontext models', async () => {
     // Arrange
     const { parseDataUri } = await import('../../utils/uriParser');
-    const { imageUrlToBase64 } = await import('../../utils/imageToBase64');
+    const { imageUrlToBase64 } = await import('@lobechat/utils');
     const { asyncifyPolling } = await import('../../utils/asyncifyPolling');
 
     const mockParseDataUri = vi.mocked(parseDataUri);
@@ -350,7 +350,7 @@ describe('createBflImage', () => {
   it('should limit imageUrls to maximum 4 images', async () => {
     // Arrange
     const { parseDataUri } = await import('../../utils/uriParser');
-    const { imageUrlToBase64 } = await import('../../utils/imageToBase64');
+    const { imageUrlToBase64 } = await import('@lobechat/utils');
     const { asyncifyPolling } = await import('../../utils/asyncifyPolling');
 
     const mockParseDataUri = vi.mocked(parseDataUri);
@@ -1,3 +1,4 @@
+import { imageUrlToBase64 } from '@lobechat/utils';
 import createDebug from 'debug';
 import { RuntimeImageGenParamsValue } from 'model-bank';
 
@@ -5,7 +6,6 @@ import { AgentRuntimeErrorType } from '../../types/error';
 import { CreateImagePayload, CreateImageResponse } from '../../types/image';
 import { type TaskResult, asyncifyPolling } from '../../utils/asyncifyPolling';
 import { AgentRuntimeError } from '../../utils/createError';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 import {
   BFL_ENDPOINTS,
@@ -1,7 +1,7 @@
 // @vitest-environment node
+import { ChatCompletionTool } from '@lobechat/model-runtime';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { ChatCompletionTool } from '../../types/chat';
 import * as debugStreamModule from '../../utils/debugStream';
 import { LobeCloudflareAI } from './index';
 
@@ -1,8 +1,8 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { ModelProvider } from 'model-bank';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import { LobeCohereAI, params } from './index';
 
@@ -1,9 +1,9 @@
-// @vitest-environment edge-runtime
+// @vitest-environment node
 import { GoogleGenAI } from '@google/genai';
+import * as imageToBase64Module from '@lobechat/utils';
 import { beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { CreateImagePayload } from '../../types/image';
-import * as imageToBase64Module from '../../utils/imageToBase64';
 import { createGoogleImage } from './createImage';
 
 const provider = 'google';
@@ -1,11 +1,11 @@
 import { Content, GenerateContentConfig, GoogleGenAI, Part } from '@google/genai';
+import { imageUrlToBase64 } from '@lobechat/utils';
 
 import { convertGoogleAIUsage } from '../../core/usageConverters/google-ai';
 import { CreateImagePayload, CreateImageResponse } from '../../types/image';
 import { AgentRuntimeError } from '../../utils/createError';
 import { getModelPricing } from '../../utils/getModelPricing';
 import { parseGoogleErrorMessage } from '../../utils/googleErrorParser';
-import { imageUrlToBase64 } from '../../utils/imageToBase64';
 import { parseDataUri } from '../../utils/uriParser';
 
 // Maximum number of images allowed for processing
@@ -1,4 +1,4 @@
-// @vitest-environment edge-runtime
+// @vitest-environment node
 import { Type as SchemaType } from '@google/genai';
 import { describe, expect, it, vi } from 'vitest';
 
@@ -1,14 +1,11 @@
-// @vitest-environment edge-runtime
+// @vitest-environment node
 import { GenerateContentResponse, Tool } from '@google/genai';
+import { OpenAIChatMessage } from '@lobechat/model-runtime';
+import { ChatStreamPayload } from '@lobechat/types';
 import OpenAI from 'openai';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { LOBE_ERROR_KEY } from '../../core/streams';
-import { AgentRuntimeErrorType } from '../../types/error';
 import * as debugStreamModule from '../../utils/debugStream';
-import * as imageToBase64Module from '../../utils/imageToBase64';
 import { LobeGoogleAI, resolveModelThinkingBudget } from './index';
 
 const provider = 'google';
@@ -1,7 +1,7 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import { AgentRuntimeErrorType } from '../../types/error';
 import { LobeGroq, params } from './index';
@@ -1,8 +1,8 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { ModelProvider } from 'model-bank';
 import { beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import { LobeHunyuanAI, params } from './index';
 
@@ -1,4 +1,4 @@
-// @vitest-environment edge-runtime
+// @vitest-environment node
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
 import { CreateImageOptions } from '../../core/openaiCompatibleFactory';
 
@@ -1,7 +1,7 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import { LobeMistralAI, params } from './index';
 
@@ -1,7 +1,7 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import { LobeMoonshotAI, params } from './index';
 
@@ -1,8 +1,8 @@
 // @vitest-environment node
+import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
 import { ModelProvider } from 'model-bank';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
 import { testProvider } from '../../providerTestUtils';
 import models from './fixtures/models.json';
 import { LobeNovitaAI } from './index';
@@ -1,4 +1,5 @@
 // @vitest-environment node
+import { imageUrlToBase64 } from '@lobechat/utils';
 import { ModelProvider } from 'model-bank';
 import { Ollama } from 'ollama/browser';
 import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
@@ -9,6 +10,13 @@ import * as debugStreamModule from '../../utils/debugStream';
 import { LobeOllamaAI, params } from './index';
 
 vi.mock('ollama/browser');
+vi.mock('@lobechat/utils', async () => {
+  const actual = await vi.importActual('@lobechat/utils');
+  return {
+    ...actual,
+    imageUrlToBase64: vi.fn(),
+  };
+});
 
 // Mock the console.error to avoid polluting test output
 vi.spyOn(console, 'error').mockImplementation(() => {});
@@ -462,13 +470,13 @@ describe('LobeOllamaAI', () => {
   });
 
   describe('buildOllamaMessages', () => {
-    it('should convert OpenAIChatMessage array to OllamaMessage array', () => {
+    it('should convert OpenAIChatMessage array to OllamaMessage array', async () => {
       const messages = [
         { content: 'Hello', role: 'user' },
         { content: 'Hi there!', role: 'assistant' },
       ];
 
-      const ollamaMessages = ollamaAI['buildOllamaMessages'](messages as any);
+      const ollamaMessages = await ollamaAI['buildOllamaMessages'](messages as any);
 
       expect(ollamaMessages).toEqual([
         { content: 'Hello', role: 'user' },
@@ -476,15 +484,15 @@ describe('LobeOllamaAI', () => {
       ]);
     });
 
-    it('should handle empty message array', () => {
+    it('should handle empty message array', async () => {
       const messages: any[] = [];
 
-      const ollamaMessages = ollamaAI['buildOllamaMessages'](messages);
+      const ollamaMessages = await ollamaAI['buildOllamaMessages'](messages);
 
       expect(ollamaMessages).toEqual([]);
     });
 
-    it('should handle multiple messages with different roles', () => {
+    it('should handle multiple messages with different roles', async () => {
       const messages = [
         { content: 'Hello', role: 'system' },
         { content: 'Hi', role: 'user' },
@@ -492,7 +500,7 @@ describe('LobeOllamaAI', () => {
         { content: 'How are you?', role: 'user' },
       ];
 
-      const ollamaMessages = ollamaAI['buildOllamaMessages'](messages as any);
+      const ollamaMessages = await ollamaAI['buildOllamaMessages'](messages as any);
 
       expect(ollamaMessages).toHaveLength(4);
       expect(ollamaMessages[0].role).toBe('system');
@@ -503,26 +511,26 @@ describe('LobeOllamaAI', () => {
   });
 
   describe('convertContentToOllamaMessage', () => {
-    it('should convert string content to OllamaMessage', () => {
+    it('should convert string content to OllamaMessage', async () => {
       const message = { content: 'Hello', role: 'user' };
 
-      const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
+      const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({ content: 'Hello', role: 'user' });
|
||||
});
|
||||
|
||||
it('should convert text content to OllamaMessage', () => {
|
||||
it('should convert text content to OllamaMessage', async () => {
|
||||
const message = {
|
||||
content: [{ type: 'text', text: 'Hello' }],
|
||||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({ content: 'Hello', role: 'user' });
|
||||
});
|
||||
|
||||
it('should convert image_url content to OllamaMessage with images', () => {
|
||||
it('should convert image_url content to OllamaMessage with images', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{
|
||||
|
|
@ -533,7 +541,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: '',
|
||||
|
|
@ -542,7 +550,7 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should ignore invalid image_url content', () => {
|
||||
it('should ignore invalid image_url content', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{
|
||||
|
|
@ -553,7 +561,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: '',
|
||||
|
|
@ -561,7 +569,7 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle mixed text and image content', () => {
|
||||
it('should handle mixed text and image content', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{ type: 'text', text: 'First text' },
|
||||
|
|
@ -578,7 +586,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: 'Second text', // Should keep latest text
|
||||
|
|
@ -587,13 +595,13 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle content with empty text', () => {
|
||||
it('should handle content with empty text', async () => {
|
||||
const message = {
|
||||
content: [{ type: 'text', text: '' }],
|
||||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: '',
|
||||
|
|
@ -601,7 +609,7 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle content with only images (no text)', () => {
|
||||
it('should handle content with only images (no text)', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{
|
||||
|
|
@ -612,7 +620,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: '',
|
||||
|
|
@ -621,7 +629,7 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle multiple images without text', () => {
|
||||
it('should handle multiple images without text', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{
|
||||
|
|
@ -640,7 +648,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: '',
|
||||
|
|
@ -649,7 +657,10 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should ignore images with invalid data URIs', () => {
|
||||
it('should ignore images with invalid data URIs', async () => {
|
||||
// Mock imageUrlToBase64 to simulate conversion failure for external URLs
|
||||
vi.mocked(imageUrlToBase64).mockRejectedValue(new Error('Network error'));
|
||||
|
||||
const message = {
|
||||
content: [
|
||||
{ type: 'text', text: 'Hello' },
|
||||
|
|
@ -665,7 +676,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: 'Hello',
|
||||
|
|
@ -674,7 +685,7 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle complex interleaved content', () => {
|
||||
it('should handle complex interleaved content', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{ type: 'text', text: 'Text 1' },
|
||||
|
|
@ -692,7 +703,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: 'Text 3', // Should keep latest text
|
||||
|
|
@ -701,7 +712,7 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle assistant role with images', () => {
|
||||
it('should handle assistant role with images', async () => {
|
||||
const message = {
|
||||
content: [
|
||||
{ type: 'text', text: 'Here is the image' },
|
||||
|
|
@ -713,7 +724,7 @@ describe('LobeOllamaAI', () => {
|
|||
role: 'assistant',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: 'Here is the image',
|
||||
|
|
@ -722,13 +733,13 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle system role with text', () => {
|
||||
it('should handle system role with text', async () => {
|
||||
const message = {
|
||||
content: [{ type: 'text', text: 'You are a helpful assistant' }],
|
||||
role: 'system',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: 'You are a helpful assistant',
|
||||
|
|
@ -736,13 +747,13 @@ describe('LobeOllamaAI', () => {
|
|||
});
|
||||
});
|
||||
|
||||
it('should handle empty content array', () => {
|
||||
it('should handle empty content array', async () => {
|
||||
const message = {
|
||||
content: [],
|
||||
role: 'user',
|
||||
};
|
||||
|
||||
const ollamaMessage = ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
const ollamaMessage = await ollamaAI['convertContentToOllamaMessage'](message as any);
|
||||
|
||||
expect(ollamaMessage).toEqual({
|
||||
content: '',
|
||||
|
|
@@ -1,4 +1,5 @@
import type { ChatModelCard } from '@lobechat/types';
import { imageUrlToBase64 } from '@lobechat/utils';
import { ModelProvider } from 'model-bank';
import { Ollama, Tool } from 'ollama/browser';
import { ClientOptions } from 'openai';

@@ -61,7 +62,7 @@ export class LobeOllamaAI implements LobeRuntimeAI {
    options?.signal?.addEventListener('abort', abort);

    const response = await this.client.chat({
      messages: this.buildOllamaMessages(payload.messages),
      messages: await this.buildOllamaMessages(payload.messages),
      model: payload.model,
      options: {
        frequency_penalty: payload.frequency_penalty,

@@ -169,11 +170,13 @@ export class LobeOllamaAI implements LobeRuntimeAI {
    }
  };

  private buildOllamaMessages(messages: OpenAIChatMessage[]) {
    return messages.map((message) => this.convertContentToOllamaMessage(message));
  private async buildOllamaMessages(messages: OpenAIChatMessage[]) {
    return Promise.all(messages.map((message) => this.convertContentToOllamaMessage(message)));
  }

  private convertContentToOllamaMessage = (message: OpenAIChatMessage): OllamaMessage => {
  private convertContentToOllamaMessage = async (
    message: OpenAIChatMessage,
  ): Promise<OllamaMessage> => {
    if (typeof message.content === 'string') {
      return { content: message.content, role: message.role };
    }

@@ -183,6 +186,9 @@ export class LobeOllamaAI implements LobeRuntimeAI {
      role: message.role,
    };

    // Collect image processing tasks for parallel execution
    const imagePromises: Array<Promise<string | null> | string> = [];

    for (const content of message.content) {
      switch (content.type) {
        case 'text': {

@@ -191,16 +197,34 @@ export class LobeOllamaAI implements LobeRuntimeAI {
          break;
        }
        case 'image_url': {
          const { base64 } = parseDataUri(content.image_url.url);
          const { base64, type } = parseDataUri(content.image_url.url);

          // If already base64 format, use it directly
          if (base64) {
            ollamaMessage.images ??= [];
            ollamaMessage.images.push(base64);
            imagePromises.push(base64);
          }
          // If it's a URL, add async conversion task with error handling
          else if (type === 'url') {
            imagePromises.push(
              imageUrlToBase64(content.image_url.url)
                .then((result) => result.base64)
                .catch(() => null), // Silently ignore failed conversions
            );
          }
          break;
        }
      }
    }

    // Process all images in parallel and filter out failed conversions
    if (imagePromises.length > 0) {
      const results = await Promise.all(imagePromises);
      const validImages = results.filter((img): img is string => img !== null);
      if (validImages.length > 0) {
        ollamaMessage.images = validImages;
      }
    }

    return ollamaMessage;
  };
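The hunk above replaces synchronous image handling with a collect-then-`Promise.all` pattern: data-URI payloads are pushed as plain strings, remote URLs become promises that resolve to base64 or `null` on failure, and failed conversions are filtered out. A minimal standalone sketch of that pattern, using a hypothetical `fetchImageAsBase64` stand-in for `imageUrlToBase64`:

```typescript
// Sketch of the parallel image-conversion pattern from the diff above.
// `fetchImageAsBase64` is a hypothetical stand-in for imageUrlToBase64.
type ImageTask = Promise<string | null> | string;

const fetchImageAsBase64 = async (url: string): Promise<string> => {
  if (!url.startsWith('https://')) throw new Error('Network error');
  return 'bW9ja0RhdGE='; // pretend base64 payload
};

const collectImages = async (urls: string[]): Promise<string[]> => {
  const tasks: ImageTask[] = urls.map((url) =>
    url.startsWith('data:')
      ? url.split(',')[1] // already base64: use it directly
      : fetchImageAsBase64(url).catch(() => null), // ignore failed conversions
  );
  // Promise.all accepts a mix of plain values and promises
  const results = await Promise.all(tasks);
  return results.filter((img): img is string => img !== null);
};
```

The type-guard filter is what lets a single failed download drop one image instead of rejecting the whole message conversion.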
@@ -1,7 +1,7 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import { LobeOpenRouterAI, params } from './index';

@@ -1,8 +1,8 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { ModelProvider } from 'model-bank';
import { beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import { LobePerplexityAI } from './index';

@@ -1,8 +1,8 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { ModelProvider } from 'model-bank';
import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import models from './fixtures/models.json';
import { LobePPIOAI } from './index';

@@ -1,4 +1,4 @@
// @vitest-environment edge-runtime
// @vitest-environment node
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { CreateImageOptions } from '../../core/openaiCompatibleFactory';

@@ -1,7 +1,7 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import { LobeSearch1API, params } from './index';
@@ -1,3 +1,4 @@
import { imageUrlToBase64 } from '@lobechat/utils';
import createDebug from 'debug';
import { RuntimeImageGenParamsValue } from 'model-bank';

@@ -5,7 +6,6 @@ import { CreateImageOptions } from '../../core/openaiCompatibleFactory';
import { CreateImagePayload, CreateImageResponse } from '../../types';
import { AgentRuntimeErrorType } from '../../types/error';
import { AgentRuntimeError } from '../../utils/createError';
import { imageUrlToBase64 } from '../../utils/imageToBase64';
import { parseDataUri } from '../../utils/uriParser';

const log = createDebug('lobe-image:siliconcloud');

@@ -1,9 +1,9 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { ModelProvider } from 'model-bank';
import OpenAI from 'openai';
import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import * as debugStreamModule from '../../utils/debugStream';
import { LobeTaichuAI } from './index';

@@ -1,8 +1,8 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { ModelProvider } from 'model-bank';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import { LobeWenxinAI, params } from './index';

@@ -1,7 +1,7 @@
// @vitest-environment node
import { LobeOpenAICompatibleRuntime } from '@lobechat/model-runtime';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { LobeOpenAICompatibleRuntime } from '../../core/BaseAI';
import { testProvider } from '../../providerTestUtils';
import { LobeZhipuAI, params } from './index';
@@ -1,7 +1,7 @@
import { AgentRuntimeErrorType } from '@lobechat/model-runtime';
import { ChatErrorType } from '@lobechat/types';
import { describe, expect, it, vi } from 'vitest';

import { AgentRuntimeErrorType } from '../types/error';
import { createErrorResponse } from './errorResponse';

describe('createErrorResponse', () => {
@@ -1,91 +0,0 @@
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';

import { imageToBase64, imageUrlToBase64 } from './imageToBase64';

describe('imageToBase64', () => {
  let mockImage: HTMLImageElement;
  let mockCanvas: HTMLCanvasElement;
  let mockContext: CanvasRenderingContext2D;

  beforeEach(() => {
    mockImage = {
      width: 200,
      height: 100,
    } as HTMLImageElement;

    mockContext = {
      drawImage: vi.fn(),
    } as unknown as CanvasRenderingContext2D;

    mockCanvas = {
      width: 0,
      height: 0,
      getContext: vi.fn().mockReturnValue(mockContext),
      toDataURL: vi.fn().mockReturnValue('data:image/webp;base64,mockBase64Data'),
    } as unknown as HTMLCanvasElement;

    vi.spyOn(document, 'createElement').mockReturnValue(mockCanvas);
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('should convert image to base64 with correct size and type', () => {
    const result = imageToBase64({ img: mockImage, size: 100, type: 'image/jpeg' });

    expect(document.createElement).toHaveBeenCalledWith('canvas');
    expect(mockCanvas.width).toBe(100);
    expect(mockCanvas.height).toBe(100);
    expect(mockCanvas.getContext).toHaveBeenCalledWith('2d');
    expect(mockContext.drawImage).toHaveBeenCalledWith(mockImage, 50, 0, 100, 100, 0, 0, 100, 100);
    expect(mockCanvas.toDataURL).toHaveBeenCalledWith('image/jpeg');
    expect(result).toBe('data:image/webp;base64,mockBase64Data');
  });

  it('should use default type when not specified', () => {
    imageToBase64({ img: mockImage, size: 100 });
    expect(mockCanvas.toDataURL).toHaveBeenCalledWith('image/webp');
  });

  it('should handle taller images correctly', () => {
    mockImage.width = 100;
    mockImage.height = 200;
    imageToBase64({ img: mockImage, size: 100 });
    expect(mockContext.drawImage).toHaveBeenCalledWith(mockImage, 0, 50, 100, 100, 0, 0, 100, 100);
  });
});

describe('imageUrlToBase64', () => {
  const mockFetch = vi.fn();
  const mockArrayBuffer = new ArrayBuffer(8);

  beforeEach(() => {
    global.fetch = mockFetch;
    global.btoa = vi.fn().mockReturnValue('mockBase64String');
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('should convert image URL to base64 string', async () => {
    mockFetch.mockResolvedValue({
      arrayBuffer: () => Promise.resolve(mockArrayBuffer),
      blob: () => Promise.resolve(new Blob([mockArrayBuffer], { type: 'image/jpg' })),
    });

    const result = await imageUrlToBase64('https://example.com/image.jpg');

    expect(mockFetch).toHaveBeenCalledWith('https://example.com/image.jpg');
    expect(global.btoa).toHaveBeenCalled();
    expect(result).toEqual({ base64: 'mockBase64String', mimeType: 'image/jpg' });
  });

  it('should throw an error when fetch fails', async () => {
    const mockError = new Error('Fetch failed');
    mockFetch.mockRejectedValue(mockError);

    await expect(imageUrlToBase64('https://example.com/image.jpg')).rejects.toThrow('Fetch failed');
  });
});

@@ -1,62 +0,0 @@
export const imageToBase64 = ({
  size,
  img,
  type = 'image/webp',
}: {
  img: HTMLImageElement;
  size: number;
  type?: string;
}) => {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d') as CanvasRenderingContext2D;
  let startX = 0;
  let startY = 0;

  if (img.width > img.height) {
    startX = (img.width - img.height) / 2;
  } else {
    startY = (img.height - img.width) / 2;
  }

  canvas.width = size;
  canvas.height = size;

  ctx.drawImage(
    img,
    startX,
    startY,
    Math.min(img.width, img.height),
    Math.min(img.width, img.height),
    0,
    0,
    size,
    size,
  );

  return canvas.toDataURL(type);
};

export const imageUrlToBase64 = async (
  imageUrl: string,
): Promise<{ base64: string; mimeType: string }> => {
  try {
    const res = await fetch(imageUrl);
    const blob = await res.blob();
    const arrayBuffer = await blob.arrayBuffer();

    const base64 =
      typeof btoa === 'function'
        ? btoa(
            new Uint8Array(arrayBuffer).reduce(
              (data, byte) => data + String.fromCharCode(byte),
              '',
            ),
          )
        : Buffer.from(arrayBuffer).toString('base64');

    return { base64, mimeType: blob.type };
  } catch (error) {
    console.error('Error converting image to base64:', error);
    throw error;
  }
};
packages/ssrf-safe-fetch/index.browser.ts (new file, 14 lines)

@@ -0,0 +1,14 @@
/**
 * Browser version of SSRF-safe fetch
 * In browser environments, we simply use the native fetch API
 * as SSRF attacks are not applicable in client-side code
 */

/**
 * Browser-safe fetch implementation
 * Uses native fetch API in browser environments
 */
// eslint-disable-next-line no-undef
export const ssrfSafeFetch = async (url: string, options?: RequestInit): Promise<Response> => {
  return fetch(url, options);
};
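The browser entry above is a plain passthrough; the protection itself lives on the Node side, which (per the `.env.example` in this commit) blocks private address ranges unless `SSRF_ALLOW_PRIVATE_IP_ADDRESS=1` or the address is on `SSRF_ALLOW_IP_ADDRESS_LIST`. As an illustration only (this is not the code shipped in `packages/ssrf-safe-fetch`), one way such a guard can classify private IPv4 addresses before allowing an outbound request:

```typescript
// Hedged sketch of an SSRF-style private-IPv4 check; the real package may
// differ (IPv6, DNS resolution, redirects, etc. are out of scope here).
const isPrivateIPv4 = (ip: string): boolean => {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255))
    return false;
  const [a, b] = parts;
  return (
    a === 127 || // loopback 127.0.0.0/8
    a === 10 || // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) // 192.168.0.0/16
  );
};

// An allowlist (cf. SSRF_ALLOW_IP_ADDRESS_LIST) punches holes in the block:
const isBlocked = (ip: string, allowList: string[] = []): boolean =>
  isPrivateIPv4(ip) && !allowList.includes(ip);
```

A caller would resolve the request's hostname first and reject the fetch when `isBlocked` returns `true`.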
@@ -2,7 +2,14 @@
  "name": "ssrf-safe-fetch",
  "version": "1.0.0",
  "private": true,
  "description": "",
  "description": "SSRF-safe fetch implementation with browser/node conditional exports",
  "exports": {
    ".": {
      "browser": "./index.browser.ts",
      "node": "./index.ts",
      "default": "./index.ts"
    }
  },
  "main": "index.ts",
  "scripts": {
    "test": "vitest run"
@@ -36,23 +36,30 @@ export const imageToBase64 = ({
  return canvas.toDataURL(type);
};

/**
 * Convert image URL to base64
 * Uses SSRF-safe fetch on server-side to prevent SSRF attacks
 */
export const imageUrlToBase64 = async (
  imageUrl: string,
): Promise<{ base64: string; mimeType: string }> => {
  try {
    const res = await fetch(imageUrl);
    const isServer = typeof window === 'undefined';

    // Use SSRF-safe fetch on server-side to prevent SSRF attacks
    const res = isServer
      ? await import('ssrf-safe-fetch').then((m) => m.ssrfSafeFetch(imageUrl))
      : await fetch(imageUrl);

    const blob = await res.blob();
    const arrayBuffer = await blob.arrayBuffer();

    const base64 =
      typeof btoa === 'function'
        ? btoa(
            new Uint8Array(arrayBuffer).reduce(
              (data, byte) => data + String.fromCharCode(byte),
              '',
            ),
          )
        : Buffer.from(arrayBuffer).toString('base64');
    // Client-side uses btoa, server-side uses Buffer
    const base64 = isServer
      ? Buffer.from(arrayBuffer).toString('base64')
      : btoa(
          new Uint8Array(arrayBuffer).reduce((data, byte) => data + String.fromCharCode(byte), ''),
        );

    return { base64, mimeType: blob.type };
  } catch (error) {
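The hunk above swaps the `typeof btoa` feature check for an explicit `isServer` branch: `Buffer` on the server, `btoa` over a byte string in the browser. The conversion step can be isolated as a small, runnable-under-Node sketch:

```typescript
// Sketch of the ArrayBuffer -> base64 branch from the hunk above:
// Buffer on the server, btoa in the browser. Mirrors the diff's logic.
const arrayBufferToBase64 = (arrayBuffer: ArrayBuffer): string => {
  const isServer = typeof window === 'undefined';
  return isServer
    ? Buffer.from(arrayBuffer).toString('base64')
    : btoa(
        // btoa expects a binary string, so map each byte to a char first
        new Uint8Array(arrayBuffer).reduce((data, byte) => data + String.fromCharCode(byte), ''),
      );
};
```

Both branches produce the same encoding; the split just avoids pulling a `btoa` polyfill into server code or `Buffer` into client bundles.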
|
|||
|
|
@ -4,9 +4,9 @@ export * from './detectChinese';
|
|||
export * from './format';
|
||||
export * from './imageToBase64';
|
||||
export * from './keyboard';
|
||||
export * from './merge';
|
||||
export * from './number';
|
||||
export * from './object';
|
||||
export * from './parseModels';
|
||||
export * from './pricing';
|
||||
export * from './safeParseJSON';
|
||||
export * from './sleep';
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import { getMessageError } from '@lobechat/fetch-sse';
|
||||
import { ChatMessageError } from '@lobechat/types';
|
||||
import { AudioPlayer } from '@lobehub/tts/react';
|
||||
import { Alert, Button, Highlighter, Select, SelectProps } from '@lobehub/ui';
|
||||
|
|
@ -9,7 +10,6 @@ import { Flexbox } from 'react-layout-kit';
|
|||
|
||||
import { useTTS } from '@/hooks/useTTS';
|
||||
import { TTSServer } from '@/types/agent';
|
||||
import { getMessageError } from '@/utils/fetch';
|
||||
|
||||
interface SelectWithTTSPreviewProps extends SelectProps {
|
||||
server: TTSServer;
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import { MessageTextChunk } from '@lobechat/fetch-sse';
|
||||
import {
|
||||
chainPickEmoji,
|
||||
chainSummaryAgentName,
|
||||
|
|
@ -16,7 +17,6 @@ import { systemAgentSelectors } from '@/store/user/slices/settings/selectors';
|
|||
import { LobeAgentChatConfig, LobeAgentConfig } from '@/types/agent';
|
||||
import { MetaData } from '@/types/meta';
|
||||
import { SystemAgentItem } from '@/types/user/settings';
|
||||
import { MessageTextChunk } from '@/utils/fetch';
|
||||
import { merge } from '@/utils/merge';
|
||||
import { setNamespace } from '@/utils/storeDebug';
|
||||
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import { getMessageError } from '@lobechat/fetch-sse';
|
||||
import { ChatMessageError } from '@lobechat/types';
|
||||
import { SpeechRecognitionOptions, useSpeechRecognition } from '@lobehub/tts/react';
|
||||
import isEqual from 'fast-deep-equal';
|
||||
|
|
@ -13,7 +14,6 @@ import { useGlobalStore } from '@/store/global';
|
|||
import { globalGeneralSelectors } from '@/store/global/selectors';
|
||||
import { useUserStore } from '@/store/user';
|
||||
import { settingsSelectors } from '@/store/user/selectors';
|
||||
import { getMessageError } from '@/utils/fetch';
|
||||
|
||||
import CommonSTT from './common';
|
||||
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import { getMessageError } from '@lobechat/fetch-sse';
|
||||
import { ChatMessageError } from '@lobechat/types';
|
||||
import { getRecordMineType } from '@lobehub/tts';
|
||||
import { OpenAISTTOptions, useOpenAISTT } from '@lobehub/tts/react';
|
||||
|
|
@ -16,7 +17,6 @@ import { useGlobalStore } from '@/store/global';
|
|||
import { globalGeneralSelectors } from '@/store/global/selectors';
|
||||
import { useUserStore } from '@/store/user';
|
||||
import { settingsSelectors } from '@/store/user/selectors';
|
||||
import { getMessageError } from '@/utils/fetch';
|
||||
|
||||
import CommonSTT from './common';
|
||||
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import { getMessageError } from '@lobechat/fetch-sse';
|
||||
import { ChatMessageError, ChatTTS } from '@lobechat/types';
|
||||
import { memo, useCallback, useEffect, useState } from 'react';
|
||||
import { useTranslation } from 'react-i18next';
|
||||
|
|
@ -5,7 +6,6 @@ import { useTranslation } from 'react-i18next';
|
|||
import { useTTS } from '@/hooks/useTTS';
|
||||
import { useChatStore } from '@/store/chat';
|
||||
import { useFileStore } from '@/store/file';
|
||||
import { getMessageError } from '@/utils/fetch';
|
||||
|
||||
import Player from './Player';
|
||||
|
||||
|
|
@@ -19,7 +19,7 @@ vi.mock('@/envs/llm', () => ({
  })),
}));

vi.mock('@/utils/parseModels', () => ({
vi.mock('@/utils/server/parseModels', () => ({
  extractEnabledModels: vi.fn(async (providerId: string, modelString?: string) => {
    if (!modelString) return undefined;
    return [`${providerId}-model-1`, `${providerId}-model-2`];

@@ -98,7 +98,7 @@ describe('genServerAiProvidersConfig', () => {
  it('should use environment variables for model lists', async () => {
    process.env.OPENAI_MODEL_LIST = '+gpt-4,+gpt-3.5-turbo';

    const { extractEnabledModels } = vi.mocked(await import('@/utils/parseModels'));
    const { extractEnabledModels } = vi.mocked(await import('@/utils/server/parseModels'));
    extractEnabledModels.mockResolvedValue(['gpt-4', 'gpt-3.5-turbo']);

    const result = await genServerAiProvidersConfig({});

@@ -116,7 +116,7 @@ describe('genServerAiProvidersConfig', () => {

    process.env.CUSTOM_OPENAI_MODELS = '+custom-model';

    const { extractEnabledModels } = vi.mocked(await import('@/utils/parseModels'));
    const { extractEnabledModels } = vi.mocked(await import('@/utils/server/parseModels'));

    await genServerAiProvidersConfig(specificConfig);

@@ -133,7 +133,7 @@ describe('genServerAiProvidersConfig', () => {
    process.env.OPENAI_MODEL_LIST = '+gpt-4->deployment1';

    const { extractEnabledModels, transformToAiModelList } = vi.mocked(
      await import('@/utils/parseModels'),
      await import('@/utils/server/parseModels'),
    );

    await genServerAiProvidersConfig(specificConfig);

@@ -206,7 +206,7 @@ describe('genServerAiProvidersConfig Error Handling', () => {
    getLLMConfig: vi.fn(() => ({})),
  }));

  vi.doMock('@/utils/parseModels', () => ({
  vi.doMock('@/utils/server/parseModels', () => ({
    extractEnabledModels: vi.fn(async () => undefined),
    transformToAiModelList: vi.fn(async () => []),
  }));

@@ -1,9 +1,9 @@
import { ProviderConfig } from '@lobechat/types';
import { extractEnabledModels, transformToAiModelList } from '@lobechat/utils';
import { AiFullModelCard, ModelProvider } from 'model-bank';
import * as AiModels from 'model-bank';

import { getLLMConfig } from '@/envs/llm';
import { extractEnabledModels, transformToAiModelList } from '@/utils/server/parseModels';

interface ProviderSpecificConfig {
  enabled?: boolean;
|||
|
|
@ -27,7 +27,7 @@ vi.stubGlobal(
|
|||
);
|
||||
|
||||
// Mock image processing utilities
|
||||
vi.mock('@/utils/fetch', async (importOriginal) => {
|
||||
vi.mock('@lobechat/fetch-sse', async (importOriginal) => {
|
||||
const module = await importOriginal();
|
||||
|
||||
return { ...(module as any), getMessageError: vi.fn() };
|
||||
|
|
@@ -988,7 +988,7 @@ describe('ChatService', () => {

 beforeEach(async () => {
 // Setup common fetchSSE mock for getChatCompletion tests
-const { fetchSSE } = await import('@/utils/fetch');
+const { fetchSSE } = await import('@lobechat/fetch-sse');
 mockFetchSSE = vi.fn().mockResolvedValue(new Response('mock response'));
 vi.mocked(fetchSSE).mockImplementation(mockFetchSSE);
 });
@@ -1049,7 +1049,7 @@ describe('ChatService', () => {

 it('should return InvalidAccessCode error when enableFetchOnClient is true and auth is enabled but user is not signed in', async () => {
 // Mock fetchSSE to call onErrorHandle with the error
-const { fetchSSE } = await import('@/utils/fetch');
+const { fetchSSE } = await import('@lobechat/fetch-sse');

 const mockFetchSSEWithError = vi.fn().mockImplementation((url, options) => {
 // Simulate the error being caught and passed to onErrorHandle
@@ -1211,8 +1211,8 @@ vi.mock('../_auth', async (importOriginal) => {
 describe('ChatService private methods', () => {
 describe('getChatCompletion', () => {
 it('should merge responseAnimation styles correctly', async () => {
-const { fetchSSE } = await import('@/utils/fetch');
-vi.mock('@/utils/fetch', async (importOriginal) => {
+const { fetchSSE } = await import('@lobechat/fetch-sse');
+vi.mock('@lobechat/fetch-sse', async (importOriginal) => {
 const module = await importOriginal();
 return {
 ...(module as any),
@@ -38,7 +38,7 @@ vi.stubGlobal(
 vi.fn(() => Promise.resolve(new Response(JSON.stringify({ some: 'data' })))),
 );

-vi.mock('@/utils/fetch', async (importOriginal) => {
+vi.mock('@lobechat/fetch-sse', async (importOriginal) => {
 const module = await importOriginal();

 return { ...(module as any), getMessageError: vi.fn() };
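The mock factories above all follow the same partial-mock pattern: `importOriginal()` hands back the real module, and the factory returns a copy with only `getMessageError` replaced. A minimal plain-TypeScript sketch of that spread-and-override step (the module object below is a stand-in, not the real `@lobechat/fetch-sse` exports):

```typescript
// Stand-in shape for the mocked module's exports.
type FetchSseModule = {
  fetchSSE: () => string;
  getMessageError: () => string | undefined;
};

// What importOriginal() would return: the real implementations.
const originalModule: FetchSseModule = {
  fetchSSE: () => 'streaming…',
  getMessageError: () => 'real error parser',
};

// Equivalent of: return { ...(module as any), getMessageError: vi.fn() };
// Only getMessageError is stubbed; every other export keeps real behavior.
const mockedModule: FetchSseModule = {
  ...originalModule,
  getMessageError: () => undefined,
};
```

Because the override is keyed by export name rather than by path, the same factory body survived the move from `@/utils/fetch` to `@lobechat/fetch-sse` — only the specifier in `vi.mock(...)` had to change.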
@@ -1,3 +1,9 @@
+import {
+  FetchSSEOptions,
+  fetchSSE,
+  getMessageError,
+  standardizeAnimationStyle,
+} from '@lobechat/fetch-sse';
 import { AgentRuntimeError, ChatCompletionErrorPayload } from '@lobechat/model-runtime';
 import { ChatErrorType, TracePayload, TraceTagMap, UIChatMessage } from '@lobechat/types';
 import { PluginRequestPayload, createHeadersWithPluginSettings } from '@lobehub/chat-plugin-sdk';
@@ -25,12 +31,6 @@ import {
 import type { ChatStreamPayload, OpenAIChatMessage } from '@/types/openai/chat';
 import { fetchWithInvokeStream } from '@/utils/electron/desktopRemoteRPCFetch';
 import { createErrorResponse } from '@/utils/errorResponse';
-import {
-  FetchSSEOptions,
-  fetchSSE,
-  getMessageError,
-  standardizeAnimationStyle,
-} from '@/utils/fetch';
 import { createTraceHeader, getTraceId } from '@/utils/trace';

 import { createHeaderWithAuth } from '../_auth';
@@ -1,7 +1,6 @@
+import { FetchSSEOptions } from '@lobechat/fetch-sse';
 import { TracePayload } from '@lobechat/types';

-import { FetchSSEOptions } from '@/utils/fetch';
-
 export interface FetchOptions extends FetchSSEOptions {
   historySummary?: string;
   signal?: AbortSignal | undefined;
@@ -1,7 +1,8 @@
+import { getMessageError } from '@lobechat/fetch-sse';
+
 import { createHeaderWithAuth } from '@/services/_auth';
 import { aiProviderSelectors, getAiInfraStoreState } from '@/store/aiInfra';
 import { ChatModelCard } from '@/types/llm';
-import { getMessageError } from '@/utils/fetch';

 import { API_ENDPOINTS } from './_url';
 import { initializeWithClientStore } from './chat/clientModelRuntime';
@@ -1,10 +1,10 @@
 import { isDesktop } from '@lobechat/const';
 import { ProxyTRPCRequestParams, dispatch, streamInvoke } from '@lobechat/electron-client-ipc';
+import { getRequestBody, headersToRecord } from '@lobechat/fetch-sse';
 import debug from 'debug';

 import { getElectronStoreState } from '@/store/electron';
 import { electronSyncSelectors } from '@/store/electron/selectors';
-import { getRequestBody, headersToRecord } from '@/utils/fetch';

 const log = debug('utils:desktopRemoteRPCFetch');

@@ -1,9 +1,8 @@
 import { getModelPropertyWithFallback } from '@lobechat/model-runtime';
-import { merge } from '@lobechat/utils';
 import { produce } from 'immer';
 import { AiFullModelCard, AiModelType } from 'model-bank';

+import { merge } from './merge';

 /**
  * Parse model string to add or remove models.
  */
@@ -16,6 +16,8 @@ export default defineConfig({
 // TODO: after refactor the errorResponse, we can remove it
 '@/utils/errorResponse': resolve(__dirname, './src/utils/errorResponse'),
+'@/utils/unzipFile': resolve(__dirname, './src/utils/unzipFile'),
+'@/utils/server': resolve(__dirname, './src/utils/server'),
 '@/utils/electron': resolve(__dirname, './src/utils/electron'),
 '@/utils': resolve(__dirname, './packages/utils/src'),
 '@/types': resolve(__dirname, './packages/types/src'),
 '@/const': resolve(__dirname, './packages/const/src'),
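The ordering of the aliases in the hunk above matters: resolution takes the first matching prefix, so the more specific `'@/utils/server'` entry has to come before the catch-all `'@/utils'`, or server-only imports would resolve into `packages/utils`. A hypothetical first-match resolver sketch (not the actual vitest resolver) makes the point:

```typescript
// Alias table in declaration order, mirroring the config above.
const aliases: Array<[string, string]> = [
  ['@/utils/server', './src/utils/server'],
  ['@/utils', './packages/utils/src'],
];

// First-match prefix resolution: specific entries shadow the catch-all
// only because they are listed first.
function resolveAlias(specifier: string): string {
  for (const [prefix, target] of aliases) {
    if (specifier === prefix || specifier.startsWith(prefix + '/')) {
      return target + specifier.slice(prefix.length);
    }
  }
  return specifier; // not aliased
}
```

With this ordering, `'@/utils/server/parseModels'` maps into `./src/utils/server`, while `'@/utils/merge'` still falls through to `./packages/utils/src`.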