♻️ refactor: refactor with chat docs site (#1309)

* 📝 docs: add package.json

* 🔧 chore: update config

* 🔧 chore: update config

* 📝 docs: update docs

* 📝 docs: add en-US docs

* 📝 docs: update docs

* 📝 docs: fix docs url

* 🔧 chore: add docs rewrites

* 🔧 chore: add docs rewrites

* 🔧 chore: fix docs rewrites

* 💄 style: update docs link

* 🚚 chore: move to contributing docs

* 🔧 chore: update contributing ci workflow

* 💄 style: update docs link
Arvin Xu 2024-02-18 21:39:04 +08:00 committed by GitHub
parent c831b978f9
commit c131fa68f0
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
125 changed files with 4267 additions and 2281 deletions


@@ -21,7 +21,7 @@ jobs:
token: ${{ secrets.GH_TOKEN }}
inactive-label: 'Inactive'
inactive-day: 30
issue-close-require:
permissions:
issues: write # for actions-cool/issues-helper to update issues
@@ -36,7 +36,7 @@ jobs:
labels: '✅ Fixed'
inactive-day: 3
body: |
👋 @{{ github.event.issue.user.login }}
👋 @{{ author }}
<br/>
Since the issue was labeled with `✅ Fixed`, but no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply.\
由于该 issue 被标记为已修复,同时 3 天未收到回应。现关闭 issue若有任何问题可评论回复。
@@ -48,7 +48,7 @@ jobs:
labels: '🤔 Need Reproduce'
inactive-day: 3
body: |
👋 @{{ github.event.issue.user.login }}
👋 @{{ author }}
<br/>
Since the issue was labeled with `🤔 Need Reproduce`, but no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply.\
由于该 issue 被标记为需要更多信息,却 3 天未收到回应。现关闭 issue若有任何问题可评论回复。


@@ -1,8 +1,8 @@
name: Issue Translate
on:
issue_comment:
on:
issue_comment:
types: [created]
issues:
issues:
types: [opened]
jobs:


@@ -4,7 +4,7 @@ on:
workflow_dispatch:
push:
paths:
- 'docs/**'
- 'contributing/**'
branches:
- main


@@ -22,13 +22,13 @@ module.exports = defineConfig({
'vi-VN',
],
temperature: 0,
modelName: 'gpt-3.5-turbo-1106',
modelName: 'gpt-3.5-turbo-0125',
splitToken: 1024,
experimental: {
jsonMode: true,
},
markdown: {
entry: ['./README.zh-CN.md', './docs/**/*.zh-CN.md'],
entry: ['./README.zh-CN.md', './docs/**/*.zh-CN.md', './docs/**/*.zh-CN.mdx'],
entryLocale: 'zh-CN',
entryExtension: '.zh-CN.md',
outputLocales: ['en-US'],


@@ -11,7 +11,7 @@
LobeChat is an open-source, high-performance chatbot framework<br/>that supports speech synthesis, multimodal, and extensible ([Function Call][fc-link]) plugin system. <br/>
Supports one-click free deployment of your private ChatGPT/LLM web application.
**English** · [简体中文](./README.zh-CN.md) · [Changelog](./CHANGELOG.md) · [Wiki][github-wiki-link] · [Report Bug][github-issues-link] · [Request Feature][github-issues-link]
**English** · [简体中文](./README.zh-CN.md) · [Changelog](./CHANGELOG.md) · [Documents][github-document-link] · [Report Bug][github-issues-link] · [Request Feature][github-issues-link]
<!-- SHIELD GROUP -->
@@ -169,7 +169,7 @@ such as automatically fetching the latest news headlines to provide users with i
Moreover, these plugins are not limited to news aggregation but can also extend to other practical functions, such as quick document retrieval,
e-commerce platform data access, and various third-party services.
> Learn More in [📘 Plugin Usage](https://github.com/lobehub/lobe-chat/wiki/Plugins)
> Learn More in [📘 Plugin Usage](https://chat-docs.lobehub.com/en/usage/plugins/basic)
<video controls src="https://github.com/lobehub/lobe-chat/assets/28616219/f29475a3-f346-4196-a435-41a6373ab9e2" muted="false"></video>
@@ -622,6 +622,7 @@ This project is [MIT](./LICENSE) licensed.
[github-action-test-shield]: https://img.shields.io/github/actions/workflow/status/lobehub/lobe-chat/test.yml?label=test&labelColor=black&logo=githubactions&logoColor=white&style=flat-square
[github-contributors-link]: https://github.com/lobehub/lobe-chat/graphs/contributors
[github-contributors-shield]: https://img.shields.io/github/contributors/lobehub/lobe-chat?color=c4f042&labelColor=black&style=flat-square
[github-document-link]: https://chat-docs.lobehub.com/en
[github-forks-link]: https://github.com/lobehub/lobe-chat/network/members
[github-forks-shield]: https://img.shields.io/github/forks/lobehub/lobe-chat?color=8ae8ff&labelColor=black&style=flat-square
[github-issues-link]: https://github.com/lobehub/lobe-chat/issues
@@ -637,7 +638,6 @@ This project is [MIT](./LICENSE) licensed.
[github-stars-shield]: https://img.shields.io/github/stars/lobehub/lobe-chat?color=ffcb47&labelColor=black&style=flat-square
[github-trending-shield]: https://trendshift.io/api/badge/repositories/2256
[github-trending-url]: https://trendshift.io/repositories/2256
[github-wiki-link]: https://github.com/lobehub/lobe-chat/wiki
[issues-link]: https://img.shields.io/github/issues/lobehub/lobe-chat.svg?style=flat
[lobe-assets-github]: https://github.com/lobehub/lobe-assets
[lobe-chat-plugins]: https://github.com/lobehub/lobe-chat-plugins


@@ -10,7 +10,7 @@
LobeChat 是开源的高性能聊天机器人框架,支持语音合成、多模态、可扩展的([Function Call][fc-link])插件系统。<br/> 支持一键免费部署私人 ChatGPT/LLM 网页应用程序。
[English](./README.md) · **简体中文** · [更新日志](./CHANGELOG.md) · [文档][github-wiki-link] · [报告问题][github-issues-link] · [请求功能][github-issues-link]
[English](./README.md) · **简体中文** · [更新日志](./CHANGELOG.md) · [文档][github-document-link] · [报告问题][github-issues-link] · [请求功能][github-issues-link]
<!-- SHIELD GROUP -->
@@ -155,7 +155,7 @@ LobeChat 支持文字转语音Text-to-SpeechTTS和语音转文字Spe
LobeChat 的插件生态系统是其核心功能的重要扩展,它极大地增强了 ChatGPT 的实用性和灵活性。通过利用插件ChatGPT 能够实现实时信息的获取和处理,例如自动获取最新新闻头条,为用户提供即时且相关的资讯。
此外,这些插件不仅局限于新闻聚合,还可以扩展到其他实用的功能,如快速检索文档、获取电商平台数据、以及其他各式各样的第三方服务。
> 通过 Wiki 了解更多 [📘 插件使用](https://github.com/lobehub/lobe-chat/wiki/Plugins.zh-CN)
> 通过文档了解更多 [📘 插件使用](https://chat-docs.lobehub.com/zh/usage/plugins/basic)
<video controls src="https://github.com/lobehub/lobe-chat/assets/28616219/f29475a3-f346-4196-a435-41a6373ab9e2" muted="false"></video>
@@ -638,6 +638,7 @@ This project is [MIT](./LICENSE) licensed.
[github-action-test-shield]: https://img.shields.io/github/actions/workflow/status/lobehub/lobe-chat/test.yml?label=test&labelColor=black&logo=githubactions&logoColor=white&style=flat-square
[github-contributors-link]: https://github.com/lobehub/lobe-chat/graphs/contributors
[github-contributors-shield]: https://img.shields.io/github/contributors/lobehub/lobe-chat?color=c4f042&labelColor=black&style=flat-square
[github-document-link]: https://chat-docs.lobehub.com/zh
[github-forks-link]: https://github.com/lobehub/lobe-chat/network/members
[github-forks-shield]: https://img.shields.io/github/forks/lobehub/lobe-chat?color=8ae8ff&labelColor=black&style=flat-square
[github-issues-link]: https://github.com/lobehub/lobe-chat/issues
@@ -653,7 +654,6 @@ This project is [MIT](./LICENSE) licensed.
[github-stars-shield]: https://img.shields.io/github/stars/lobehub/lobe-chat?color=ffcb47&labelColor=black&style=flat-square
[github-trending-shield]: https://trendshift.io/api/badge/repositories/2256
[github-trending-url]: https://trendshift.io/repositories/2256
[github-wiki-link]: https://github.com/lobehub/lobe-chat/wiki
[issues-link]: https://img.shields.io/github/issues/lobehub/lobe-chat.svg?style=flat
[lobe-assets-github]: https://github.com/lobehub/lobe-assets
[lobe-chat-plugins]: https://github.com/lobehub/lobe-chat-plugins


@@ -0,0 +1,713 @@
# Complete Guide to LobeChat Feature Development
This document aims to guide developers on how to develop a complete feature requirement in LobeChat.
We will use the implementation of sessionGroup as an example: [✨ feat: add session group manager](https://github.com/lobehub/lobe-chat/pull/1055), and explain the complete implementation process through the following six main sections:
1. Data Model / Database Definition
2. Service Implementation / Model Implementation
3. Frontend Data Flow Store Implementation
4. UI Implementation and Action Binding
5. Data Migration
6. Data Import and Export
## 1. Database Section
To implement the Session Group feature, it is necessary to define the relevant data model and indexes at the database level.
Define a new sessionGroup table in 3 steps:
### 1. Establish Data Model Schema
Define the data model of `DB_SessionGroup` in `src/database/schemas/sessionGroup.ts`:
```typescript
import { z } from 'zod';
export const DB_SessionGroupSchema = z.object({
name: z.string(),
sort: z.number().optional(),
});
export type DB_SessionGroup = z.infer<typeof DB_SessionGroupSchema>;
```
### 2. Create Database Indexes
Since a new table is being added, the database schema needs a new version that declares the indexes for the `sessionGroup` table.
Add `dbSchemaV4` in `src/database/core/schemas.ts`:
```diff
// ... previous implementations
// ************************************** //
// ******* Version 3 - 2023-12-06 ******* //
// ************************************** //
// - Added `plugin` table
export const dbSchemaV3 = {
...dbSchemaV2,
plugins:
'&identifier, type, manifest.type, manifest.meta.title, manifest.meta.description, manifest.meta.author, createdAt, updatedAt',
};
+ // ************************************** //
+ // ******* Version 4 - 2024-01-21 ******* //
+ // ************************************** //
+ // - Added `sessionGroup` table
+ export const dbSchemaV4 = {
+ ...dbSchemaV3,
+ sessionGroups: '&id, name, sort, createdAt, updatedAt',
+ sessions: '&id, type, group, pinned, meta.title, meta.description, meta.tags, createdAt, updatedAt',
+ };
```
> \[!Note]
>
> In addition to `sessionGroups`, the definition of `sessions` has also been modified here due to data migration. However, as this section only focuses on schema definition and does not delve into the implementation of data migration, please refer to section five for details.
> \[!Important]
>
> If you are unfamiliar with the need to create indexes here and the syntax of schema definition, you may need to familiarize yourself with the basics of Dexie.js. You can refer to the [📘 Local Database](./Local-Database.zh-CN) section for relevant information.
### 3. Add the sessionGroups Table to the Local DB
Extend the local database class to include the new `sessionGroups` table:
```diff
import { dbSchemaV1, dbSchemaV2, dbSchemaV3, dbSchemaV4 } from './schemas';
interface LobeDBSchemaMap {
files: DB_File;
messages: DB_Message;
plugins: DB_Plugin;
+ sessionGroups: DB_SessionGroup;
sessions: DB_Session;
topics: DB_Topic;
}
// Define a local DB
export class LocalDB extends Dexie {
public files: LobeDBTable<'files'>;
public sessions: LobeDBTable<'sessions'>;
public messages: LobeDBTable<'messages'>;
public topics: LobeDBTable<'topics'>;
public plugins: LobeDBTable<'plugins'>;
+ public sessionGroups: LobeDBTable<'sessionGroups'>;
constructor() {
super(LOBE_CHAT_LOCAL_DB_NAME);
this.version(1).stores(dbSchemaV1);
this.version(2).stores(dbSchemaV2);
this.version(3).stores(dbSchemaV3);
+ this.version(4).stores(dbSchemaV4);
this.files = this.table('files');
this.sessions = this.table('sessions');
this.messages = this.table('messages');
this.topics = this.table('topics');
this.plugins = this.table('plugins');
+ this.sessionGroups = this.table('sessionGroups');
}
}
```
As a result, you can now see the `sessionGroups` table inside `LOBE_CHAT_DB` under `Application` -> `Storage` -> `IndexedDB` in the browser DevTools.
![](https://github.com/lobehub/lobe-chat/assets/28616219/aea50f66-4060-4a32-88c8-b3c672d05be8)
## 2. Model and Service Section
### Define Model
When building the LobeChat application, the Model is responsible for interacting with the database. It defines how to read, insert, update, and delete data from the database, as well as defining specific business logic.
In `src/database/model/sessionGroup.ts`, the `SessionGroupModel` is defined as follows:
```typescript
import { BaseModel } from '@/database/core';
import { DB_SessionGroup, DB_SessionGroupSchema } from '@/database/schemas/sessionGroup';
import { nanoid } from '@/utils/uuid';
class _SessionGroupModel extends BaseModel {
constructor() {
super('sessionGroups', DB_SessionGroupSchema);
}
async create(name: string, sort?: number, id = nanoid()) {
return this._add({ name, sort }, id);
}
// ... Implementation of other CRUD methods
}
export const SessionGroupModel = new _SessionGroupModel();
```
### Service Implementation
In LobeChat, the Service layer is mainly responsible for communicating with the backend service, encapsulating business logic, and providing data to other layers in the frontend. `SessionService` is a service class specifically handling business logic related to sessions. It encapsulates operations such as creating sessions, querying sessions, and updating sessions.
To keep the code maintainable and extensible, we place the logic related to session grouping in the `SessionService`. This keeps the business logic of the session domain cohesive: when business requirements grow or change, it is easier to modify and extend within this domain.
`SessionService` implements session group-related request logic by calling methods from `SessionGroupModel`. The following is the implementation of Session Group-related request logic in `sessionService`:
```typescript
class SessionService {
// ... Omitted session business logic
// ************************************** //
// *********** SessionGroup *********** //
// ************************************** //
async createSessionGroup(name: string, sort?: number) {
const item = await SessionGroupModel.create(name, sort);
if (!item) {
throw new Error('session group create Error');
}
return item.id;
}
// ... Other SessionGroup related implementations
}
```
## 3. Store Action Section
In the LobeChat application, the Store module is used to manage the frontend state of the application. The Actions within it are functions that trigger state updates, usually by calling methods in the service layer to perform actual data processing operations and then updating the state in the Store. We use `zustand` as the underlying dependency for the Store module. For a detailed practical introduction to state management, you can refer to [📘 Best Practices for State Management](../State-Management/State-Management-Intro.zh-CN.md).
### sessionGroup CRUD
CRUD operations for session groups are the core behaviors for managing session group data. In `src/store/session/slice/sessionGroup`, we will implement the state logic related to session groups, including adding, deleting, updating session groups, and their sorting.
The following are the methods of the `SessionGroupAction` interface that need to be implemented in the `action.ts` file:
```ts
export interface SessionGroupAction {
// Add session group
addSessionGroup: (name: string) => Promise<string>;
// Remove session group
removeSessionGroup: (id: string) => Promise<void>;
// Update session group ID for a session
updateSessionGroupId: (sessionId: string, groupId: string) => Promise<void>;
// Update session group name
updateSessionGroupName: (id: string, name: string) => Promise<void>;
// Update session group sorting
updateSessionGroupSort: (items: SessionGroupItem[]) => Promise<void>;
}
```
Taking the `addSessionGroup` method as an example, we first call the `createSessionGroup` method of `sessionService` to create a new session group, and then use the `refreshSessions` method to refresh the sessions state:
```ts
export const createSessionGroupSlice: StateCreator<
SessionStore,
[['zustand/devtools', never]],
[],
SessionGroupAction
> = (set, get) => ({
// Implement the logic for adding a session group
addSessionGroup: async (name) => {
// Call the createSessionGroup method in the service layer and pass in the session group name
const id = await sessionService.createSessionGroup(name);
// Call the get method to get the current Store state and execute the refreshSessions method to refresh the session data
await get().refreshSessions();
// Return the ID of the newly created session group
return id;
},
// ... Other action implementations
});
```
With the above implementation, we can ensure that after adding a new session group, the application's state will be updated in a timely manner, and the relevant components will receive the latest state and re-render. This approach improves the predictability and maintainability of the data flow, while also simplifying communication between components.
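The remaining actions follow the same service-then-refresh pattern. As a minimal self-contained sketch (the in-memory "service" and refresh counter below are illustrative stand-ins, not LobeChat's real async, IndexedDB-backed implementation), `removeSessionGroup` could look like:

```typescript
// Illustrative sketch of the service-then-refresh pattern used by the
// session group actions. The in-memory "service" stands in for
// sessionService; the real methods are async and persist to IndexedDB.
const groups = new Map<string, { id: string; name: string }>();
let refreshCount = 0;

const sessionService = {
  createSessionGroup: (name: string): string => {
    const id = `group-${groups.size + 1}`;
    groups.set(id, { id, name });
    return id;
  },
  removeSessionGroup: (id: string): void => {
    groups.delete(id);
  },
};

// Mirrors refreshSessions(): re-query session data so subscribed
// components receive the latest state and re-render.
const refreshSessions = (): void => {
  refreshCount += 1;
};

// The action: mutate through the service layer, then refresh the store.
const removeSessionGroup = (id: string): void => {
  sessionService.removeSessionGroup(id);
  refreshSessions();
};

const id = sessionService.createSessionGroup('Work');
removeSessionGroup(id);
console.log(groups.size, refreshCount); // → 0 1
```

Every mutation action in the slice keeps this shape: it never writes state directly, only through the service plus a refresh.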
### Sessions Group Logic Refactoring
This requirement involves upgrading the Sessions feature to transform it from a single list to three different groups: `pinnedSessions` (pinned list), `customSessionGroups` (custom groups), and `defaultSessions` (default list).
To handle these groups, we need to refactor the implementation logic of `useFetchSessions`. Here are the key changes:
1. Use the `sessionService.getSessionsWithGroup` method to call the backend API and retrieve the grouped session data.
2. Save the retrieved data into three different state fields: `pinnedSessions`, `customSessionGroups`, and `defaultSessions`.
#### `useFetchSessions` Method
This method is defined in `createSessionSlice` as follows:
```typescript
export const createSessionSlice: StateCreator<
SessionStore,
[['zustand/devtools', never]],
[],
SessionAction
> = (set, get) => ({
// ... other methods
useFetchSessions: () =>
useSWR<ChatSessionList>(FETCH_SESSIONS_KEY, sessionService.getSessionsWithGroup, {
onSuccess: (data) => {
set(
{
customSessionGroups: data.customGroup,
defaultSessions: data.default,
isSessionsFirstFetchFinished: true,
pinnedSessions: data.pinned,
sessions: data.all,
},
false,
n('useFetchSessions/onSuccess', data),
);
},
}),
});
```
After successfully retrieving the data, we use the `set` method to update the `customSessionGroups`, `defaultSessions`, `pinnedSessions`, and `sessions` states. This ensures that the states are synchronized with the latest session data.
#### `sessionService.getSessionsWithGroup` Method
The `sessionService.getSessionsWithGroup` method is responsible for calling the backend API `SessionModel.queryWithGroups()`.
```typescript
class SessionService {
// ... other SessionGroup related implementations
async getSessionsWithGroup(): Promise<ChatSessionList> {
return SessionModel.queryWithGroups();
}
}
```
#### `SessionModel.queryWithGroups` Method
This method is the core method called by `sessionService.getSessionsWithGroup`, and it is responsible for querying and organizing session data. The code is as follows:
```typescript
class _SessionModel extends BaseModel {
// ... other methods
/**
* Query session data and categorize sessions based on groups.
* @returns {Promise<ChatSessionList>} An object containing all sessions and categorized session lists.
*/
async queryWithGroups(): Promise<ChatSessionList> {
// Query session group data
const groups = await SessionGroupModel.query();
// Query custom session groups based on session group IDs
const customGroups = await this.queryByGroupIds(groups.map((item) => item.id));
// Query default session list
const defaultItems = await this.querySessionsByGroupId(SessionDefaultGroup.Default);
// Query pinned sessions
const pinnedItems = await this.getPinnedSessions();
// Query all sessions
const all = await this.query();
// Combine and return all sessions and their group information
return {
all, // Array containing all sessions
customGroup: groups.map((group) => ({ ...group, children: customGroups[group.id] })), // Custom groups
default: defaultItems, // Default session list
pinned: pinnedItems, // Pinned session list
};
}
}
```
The `queryWithGroups` method first queries all session groups, then based on the IDs of these groups, it queries custom session groups, as well as default and pinned sessions. Finally, it returns an object containing all sessions and categorized session lists.
### Adjusting sessions selectors
Due to changes in the logic of grouping within sessions, we need to adjust the logic of the `sessions` selectors to ensure they can correctly handle the new data structure.
Original selectors:
```ts
// Default group
const defaultSessions = (s: SessionStore): LobeSessions => s.sessions;
// Pinned group
const pinnedSessionList = (s: SessionStore) =>
defaultSessions(s).filter((s) => s.group === SessionGroupDefaultKeys.Pinned);
// Unpinned group
const unpinnedSessionList = (s: SessionStore) =>
defaultSessions(s).filter((s) => s.group === SessionGroupDefaultKeys.Default);
```
Revised:
```ts
const defaultSessions = (s: SessionStore): LobeSessions => s.defaultSessions;
const pinnedSessions = (s: SessionStore): LobeSessions => s.pinnedSessions;
const customSessionGroups = (s: SessionStore): CustomSessionGroup[] => s.customSessionGroups;
```
Since all data retrieval in the UI is implemented using syntax like `useSessionStore(sessionSelectors.defaultSessions)`, we only need to modify the selector implementation of `defaultSessions` to complete the data structure change. The data retrieval code in the UI layer does not need to be changed at all, which can greatly reduce the cost and risk of refactoring.
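Because selectors are plain functions over the store state, call sites are insulated from the refactor. The following self-contained sketch (zustand and the real state types are omitted; the field names follow the ones above) shows the contract the UI relies on:

```typescript
// Selectors are plain (state) => slice functions, so changing a selector's
// body does not require touching the components that call it.
interface SessionStoreState {
  customSessionGroups: { id: string; name: string; children: string[] }[];
  defaultSessions: string[];
  pinnedSessions: string[];
}

const sessionSelectors = {
  customSessionGroups: (s: SessionStoreState) => s.customSessionGroups,
  defaultSessions: (s: SessionStoreState) => s.defaultSessions,
  pinnedSessions: (s: SessionStoreState) => s.pinnedSessions,
};

const state: SessionStoreState = {
  customSessionGroups: [{ id: 'g1', name: 'Work', children: ['s2'] }],
  defaultSessions: ['s1'],
  pinnedSessions: ['s3'],
};

// A component would call useSessionStore(sessionSelectors.defaultSessions);
// the direct call below exercises the same contract.
console.log(sessionSelectors.defaultSessions(state)); // → [ 's1' ]
```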
> \[!Important]
>
> If you are not familiar with the concept and functionality of selectors, you can refer to the section [📘 Data Storage and Retrieval Module](./State-Management-Selectors.en-US) for relevant information.
## 4. UI Section
Bind the Store actions in UI components to implement the interaction logic, for example in `CreateGroupModal`:
```tsx
const CreateGroupModal = () => {
// ... Other logic
const [updateSessionGroup, addCustomGroup] = useSessionStore((s) => [
s.updateSessionGroupId,
s.addSessionGroup,
]);
return (
<Modal
onOk={async () => {
// ... Other logic
const groupId = await addCustomGroup(name);
await updateSessionGroup(sessionId, groupId);
}}
>
{/* ... */}
</Modal>
);
};
```
## 5. Data Migration
In the process of software development, data migration is an inevitable issue, especially when the existing data structure cannot meet the new business requirements. For this iteration of SessionGroup, we need to handle the migration of the `group` field in the `session`, which is a typical data migration case.
### Issues with the Old Data Structure
In the old data structure, the `group` field was used to mark whether the session was "pinned" or belonged to a "default" group. However, when support for multiple session groups is needed, the original data structure becomes inflexible.
For example:
```
before pin: group = abc
after pin: group = pinned
after unpin: group = default
```
From the above example, it can be seen that once a session is unpinned from the "pinned" state, the `group` field cannot be restored to its original `abc` value. This is because we do not have a separate field to maintain the pinned state. Therefore, we have introduced a new field `pinned` to indicate whether the session is pinned, while the `group` field will be used solely to identify the session group.
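To make the problem concrete, here is a minimal sketch (the types are illustrative, not the real `V2Session`/`V3Session` definitions) showing how a dedicated `pinned` flag lets the original group survive a pin/unpin round trip:

```typescript
// Old shape: a single overloaded field — pinning overwrites the group.
interface OldSession { group: string } // 'abc' | 'pinned' | 'default'
// New shape: pin state and grouping are independent fields.
interface NewSession { group: string; pinned: boolean }

// With the old shape, pinning destroys the original group value:
const oldPinned: OldSession = { group: 'pinned' }; // was { group: 'abc' }

// With the new shape, pin/unpin round-trips cleanly:
const pin = (s: NewSession): NewSession => ({ ...s, pinned: true });
const unpin = (s: NewSession): NewSession => ({ ...s, pinned: false });

const session: NewSession = { group: 'abc', pinned: false };
const roundTrip = unpin(pin(session));
console.log(oldPinned.group, roundTrip.group); // → pinned abc
```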
### Migration Strategy
The core logic of this migration is as follows:
- When the user's `group` field is `pinned`, set their `pinned` field to `true`, and set the group to `default`.
However, data migration in LobeChat typically involves two parts: **configuration file migration** and **database migration**. Therefore, the above logic will need to be implemented separately in these two areas.
#### Configuration File Migration
For configuration file migration, we recommend performing it before database migration, as configuration file migration is usually easier to test and validate. LobeChat's file migration configuration is located in the `src/migrations/index.ts` file, which defines the various versions of configuration file migration and their corresponding migration scripts.
```diff
// Current latest version number
- export const CURRENT_CONFIG_VERSION = 2;
+ export const CURRENT_CONFIG_VERSION = 3;
// Historical version upgrade module
const ConfigMigrations = [
+ /**
+ * 2024.01.22
+ * from `group = pinned` to `pinned:true`
+ */
+ MigrationV2ToV3,
/**
* 2023.11.27
* Migrate from single key database to dexie-based relational structure
*/
MigrationV1ToV2,
/**
* 2023.07.11
* just the first version, Nothing to do
*/
MigrationV0ToV1,
];
```
The logic for this configuration file migration is defined in `src/migrations/FromV2ToV3/index.ts`, simplified as follows:
```ts
export class MigrationV2ToV3 implements Migration {
// Specify the version from which to upgrade
version = 2;
migrate(data: MigrationData<V2ConfigState>): MigrationData<V3ConfigState> {
const { sessions } = data.state;
return {
...data,
state: {
...data.state,
sessions: sessions.map((s) => this.migrateSession(s)),
},
};
}
migrateSession = (session: V2Session): V3Session => {
return {
...session,
group: 'default',
pinned: session.group === 'pinned',
};
};
}
```
It can be seen that the migration implementation is very simple. However, it is important to ensure the correctness of the migration, so corresponding test cases need to be written in `src/migrations/FromV2ToV3/migrations.test.ts`:
```ts
import { MigrationData, VersionController } from '@/migrations/VersionController';
import { MigrationV1ToV2 } from '../FromV1ToV2';
import inputV1Data from '../FromV1ToV2/fixtures/input-v1-session.json';
import inputV2Data from './fixtures/input-v2-session.json';
import outputV3DataFromV1 from './fixtures/output-v3-from-v1.json';
import outputV3Data from './fixtures/output-v3.json';
import { MigrationV2ToV3 } from './index';
describe('MigrationV2ToV3', () => {
let migrations;
let versionController: VersionController<any>;
beforeEach(() => {
migrations = [MigrationV2ToV3];
versionController = new VersionController(migrations, 3);
});
it('should migrate data correctly through multiple versions', () => {
const data: MigrationData = inputV2Data;
const migratedData = versionController.migrate(data);
expect(migratedData.version).toEqual(outputV3Data.version);
expect(migratedData.state.sessions).toEqual(outputV3Data.state.sessions);
expect(migratedData.state.topics).toEqual(outputV3Data.state.topics);
expect(migratedData.state.messages).toEqual(outputV3Data.state.messages);
});
it('should work correct from v1 to v3', () => {
const data: MigrationData = inputV1Data;
versionController = new VersionController([MigrationV2ToV3, MigrationV1ToV2], 3);
const migratedData = versionController.migrate(data);
expect(migratedData.version).toEqual(outputV3DataFromV1.version);
expect(migratedData.state.sessions).toEqual(outputV3DataFromV1.state.sessions);
expect(migratedData.state.topics).toEqual(outputV3DataFromV1.state.topics);
expect(migratedData.state.messages).toEqual(outputV3DataFromV1.state.messages);
});
});
```
The unit tests rely on `fixtures` to pin down the test data. The cases verify two things: 1) the correctness of a single migration (v2 -> v3), and 2) the correctness of a complete migration chain (v1 -> v3).
> \[!Important]
>
> The version number in the configuration file may not match the database version number, as database version updates do not always involve changes to the data structure (such as adding tables or fields), while configuration file version updates usually involve data migration.
#### Database Migration
Database migration needs to be implemented in the `LocalDB` class, which is defined in the `src/database/core/db.ts` file. The migration process involves adding a new `pinned` field for each record in the `sessions` table and resetting the `group` field:
```diff
export class LocalDB extends Dexie {
public files: LobeDBTable<'files'>;
public sessions: LobeDBTable<'sessions'>;
public messages: LobeDBTable<'messages'>;
public topics: LobeDBTable<'topics'>;
public plugins: LobeDBTable<'plugins'>;
public sessionGroups: LobeDBTable<'sessionGroups'>;
constructor() {
super(LOBE_CHAT_LOCAL_DB_NAME);
this.version(1).stores(dbSchemaV1);
this.version(2).stores(dbSchemaV2);
this.version(3).stores(dbSchemaV3);
this.version(4)
.stores(dbSchemaV4)
+ .upgrade((trans) => this.upgradeToV4(trans));
this.files = this.table('files');
this.sessions = this.table('sessions');
this.messages = this.table('messages');
this.topics = this.table('topics');
this.plugins = this.table('plugins');
this.sessionGroups = this.table('sessionGroups');
}
+ /**
+ * 2024.01.22
+ *
+ * DB V3 to V4
+ * from `group = pinned` to `pinned:true`
+ */
+ upgradeToV4 = async (trans: Transaction) => {
+ const sessions = trans.table('sessions');
+ await sessions.toCollection().modify((session) => {
+ // translate boolean to number
+ session.pinned = session.group === 'pinned' ? 1 : 0;
+ session.group = 'default';
+ });
+ };
}
```
This is our data migration strategy. When performing the migration, it is essential to ensure the correctness of the migration script and validate the migration results through thorough testing.
## 6. Data Import and Export
In LobeChat, the data import and export feature is designed to ensure that users can migrate their data between different devices. This includes session, topic, message, and settings data. In the implementation of the Session Group feature, we also need to handle data import and export to ensure that the complete exported data can be restored exactly the same on other devices.
The core implementation of data import and export is in the `ConfigService` in `src/service/config.ts`, with key methods as follows:
| Method Name | Description |
| --------------------- | -------------------------- |
| `importConfigState` | Import configuration data |
| `exportAgents` | Export all agent data |
| `exportSessions` | Export all session data |
| `exportSingleSession` | Export single session data |
| `exportSingleAgent` | Export single agent data |
| `exportSettings` | Export settings data |
| `exportAll` | Export all data |
### Data Export
In LobeChat, when a user chooses to export data, the current session, topic, message, and settings data are packaged into a JSON file and provided for download. The standard structure of this JSON file is as follows:
```json
{
"exportType": "sessions",
"state": {
"sessions": [],
"topics": [],
"messages": []
},
"version": 3
}
```
Where:
- `exportType`: Identifies the type of data being exported, currently one of `sessions`, `agents`, `settings`, and `all`.
- `state`: Stores the actual data, with different data types for different `exportType`.
- `version`: Indicates the data version.
In the implementation of the Session Group feature, we need to add `sessionGroups` data to the `state` field. This way, when users export data, their Session Group data will also be included.
For example, when exporting sessions, the relevant implementation code modification is as follows:
```diff
class ConfigService {
// ... Other code omitted
exportSessions = async () => {
const sessions = await sessionService.getSessions();
+ const sessionGroups = await sessionService.getSessionGroups();
const messages = await messageService.getAllMessages();
const topics = await topicService.getAllTopics();
- const config = createConfigFile('sessions', { messages, sessions, topics });
+ const config = createConfigFile('sessions', { messages, sessionGroups, sessions, topics });
exportConfigFile(config, 'sessions');
};
}
```
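With this change, an exported sessions file would carry the groups alongside the other state. A hedged sketch of the resulting file (the `id`/`name`/`sort` values are illustrative):

```json
{
  "exportType": "sessions",
  "state": {
    "sessionGroups": [{ "id": "group-1", "name": "Work", "sort": 0 }],
    "sessions": [],
    "topics": [],
    "messages": []
  },
  "version": 3
}
```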
### Data Import
The data import functionality is implemented through `ConfigService.importConfigState`. When users choose to import data, they need to provide a JSON file that conforms to the above structure specification. The `importConfigState` method accepts the data of the configuration file and imports it into the application.
In the implementation of the Session Group feature, we need to handle the `sessionGroups` data during the data import process. This way, when users import data, their Session Group data will also be imported correctly.
The following is the modified code for the import implementation in `importConfigState`:
```diff
class ConfigService {
// ... Other code omitted
+ importSessionGroups = async (sessionGroups: SessionGroupItem[]) => {
+ return sessionService.batchCreateSessionGroups(sessionGroups);
+ };
importConfigState = async (config: ConfigFile): Promise<ImportResults | undefined> => {
switch (config.exportType) {
case 'settings': {
await this.importSettings(config.state.settings);
break;
}
case 'agents': {
+ const sessionGroups = await this.importSessionGroups(config.state.sessionGroups);
const data = await this.importSessions(config.state.sessions);
return {
+ sessionGroups: this.mapImportResult(sessionGroups),
sessions: this.mapImportResult(data),
};
}
case 'all': {
await this.importSettings(config.state.settings);
+ const sessionGroups = await this.importSessionGroups(config.state.sessionGroups);
const [sessions, messages, topics] = await Promise.all([
this.importSessions(config.state.sessions),
this.importMessages(config.state.messages),
this.importTopics(config.state.topics),
]);
return {
messages: this.mapImportResult(messages),
+ sessionGroups: this.mapImportResult(sessionGroups),
sessions: this.mapImportResult(sessions),
topics: this.mapImportResult(topics),
};
}
case 'sessions': {
+ const sessionGroups = await this.importSessionGroups(config.state.sessionGroups);
const [sessions, messages, topics] = await Promise.all([
this.importSessions(config.state.sessions),
this.importMessages(config.state.messages),
this.importTopics(config.state.topics),
]);
return {
messages: this.mapImportResult(messages),
+ sessionGroups: this.mapImportResult(sessionGroups),
sessions: this.mapImportResult(sessions),
topics: this.mapImportResult(topics),
};
}
}
};
}
```
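The `this.mapImportResult` helper used above is not shown in this document. As a rough, hypothetical sketch (its real signature in the codebase may differ), it presumably flattens a batch-operation result into simple counts:

```typescript
// Hypothetical sketch only — the actual `mapImportResult` implementation is
// not shown in this document and may differ in the real codebase.
interface BatchResult {
  added: number;
  errors: Error[];
  skips: string[]; // ids of records skipped (e.g. because they already exist)
}

interface ImportResultCounts {
  added: number;
  errors: number;
  skips: number;
}

const mapImportResult = (result: BatchResult): ImportResultCounts => ({
  added: result.added,
  errors: result.errors.length,
  skips: result.skips.length,
});
```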
A key point in the change above is that session groups are imported before sessions: if sessions were imported first and a session's SessionGroup id did not exist in the current database, that session's group would fall back to the default value, and the association between the session and its session group would be lost.
This completes the import/export handling for the LobeChat Session Group feature, ensuring that users' Session Group data is processed correctly in both directions.
## Summary
The above is the complete implementation process of the LobeChat Session Group feature. Developers can refer to this document for the development and testing of related functionalities.
@@ -178,7 +178,7 @@ class SessionService {
## Part 3: Store Actions
In the LobeChat application, the Store is the module that manages the application's frontend state. Its Actions are functions that trigger state updates, usually by calling service-layer methods to perform the actual data operations and then updating the Store's state. We use `zustand` as the underlying dependency of the Store module; for a detailed introduction to state-management practice, see [📘 Best Practices for State Management](./State-Management-Intro.zh-CN)
In the LobeChat application, the Store is the module that manages the application's frontend state. Its Actions are functions that trigger state updates, usually by calling service-layer methods to perform the actual data operations and then updating the Store's state. We use `zustand` as the underlying dependency of the Store module; for a detailed introduction to state-management practice, see [📘 Best Practices for State Management](../State-Management/State-Management-Intro.zh-CN.md)
### sessionGroup CRUD
@@ -349,7 +349,7 @@ const customSessionGroups = (s: SessionStore): CustomSessionGroup[] => s.customS
> \[!Important]
>
> If you are not yet familiar with the concept and purpose of Selectors, see [📘 Data Store Selector](./State-Management-Selectors.zh-CN) for details.
> If you are not yet familiar with the concept and purpose of Selectors, see [📘 Data Store Selector](../State-Management/State-Management-Selectors.zh-CN.md) for details.
## Part 4: UI
@@ -44,7 +44,7 @@ src
└── utils # Common utility functions
```
For a detailed introduction to the directory structure, see: [Folder Structure](Folder-Structure.zh-CN)
For a detailed introduction to the directory structure, see: [Folder Structure](Folder-Structure.zh-CN.md)
## Setting Up the Local Development Environment
@@ -76,7 +76,7 @@ bun run dev
# Visit http://localhost:3010 to view the application
```
You should now see the LobeChat welcome page in your browser. For a detailed environment configuration guide, see the [Development Environment Setup Guide](Setup-Development.zh-CN).
You should now see the LobeChat welcome page in your browser. For a detailed environment configuration guide, see the [Development Environment Setup Guide](Setup-Development.zh-CN.md).
## Code Style and Contribution Guidelines
@@ -87,7 +87,7 @@ bun run dev
All contributions go through code review. Maintainers may suggest or request changes; please respond to review comments and make adjustments promptly. We look forward to your participation and contributions.
For detailed code style and contribution guidelines, see the [Code Style and Contribution Guidelines](Contributing-Guidelines.zh-CN).
For detailed code style and contribution guidelines, see the [Code Style and Contribution Guidelines](Contributing-Guidelines.zh-CN.md).
## Internationalization Implementation Guide
@@ -95,9 +95,9 @@ LobeChat 采用 `i18next` 和 `lobe-i18n` 实现多语言支持,确保用户
The internationalization files live in `src/locales` and include the default language (Chinese). The JSON files for other languages are generated automatically via `lobe-i18n`.
To add a new language, follow the specific steps in the [New Locale Guide](Add-New-Locale.zh-CN). We encourage you to join our internationalization effort and help provide a better experience for users worldwide.
To add a new language, follow the specific steps in the [New Locale Guide](../Internationalization/Add-New-Locale.zh-CN.md). We encourage you to join our internationalization effort and help provide a better experience for users worldwide.
For a detailed guide, see the [Internationalization Implementation Guide](Internationalization-Implementation.zh-CN).
For a detailed guide, see the [Internationalization Implementation Guide](../Internationalization/Internationalization-Implementation.zh-CN.md).
## Appendix: Resources and References
@@ -6,7 +2,9 @@
<h1>Lobe Chat Wiki</h1>
LobeChat is a open-source, extensible ([Function Calling][fc-url]), high-performance chatbot framework. <br/> It supports one-click free deployment of your private ChatGPT/LLM web application.
LobeChat is an open-source, extensible ([Function Calling][fc-url]), high-performance chatbot framework. <br/> It supports one-click free deployment of your private ChatGPT/LLM web application.
[Usage Documents](https://chat-docs.lobehub.com/en) | [使用指南](https://chat-docs.lobehub.com/zh)
</div>
@@ -14,43 +16,33 @@ LobeChat is a open-source, extensible ([Function Calling][fc-url]), high-perform
<!-- DOCS LIST -->
### 🤯 Usage
### 🤯 Basic
- [Custom Agents Guide](https://github.com/lobehub/lobe-chat/wiki/Usage-Agents) | [自定义助手指南](https://github.com/lobehub/lobe-chat/wiki/Usage-Agents.zh-CN)
- [Plugin Usage](https://github.com/lobehub/lobe-chat/wiki/Plugins) | [插件使用](https://github.com/lobehub/lobe-chat/wiki/Plugins.zh-CN)
- [Topic Guide](https://github.com/lobehub/lobe-chat/wiki/Usage-Topics) | [话题指南](https://github.com/lobehub/lobe-chat/wiki/Usage-Topics.zh-CN)
<br/>
### 🛳 Self-Hosting
- [Docker Deployment Guide](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment) | [Docker 部署指引](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment.zh-CN)
- [Deploying with Azure OpenAI](https://github.com/lobehub/lobe-chat/wiki/Deploy-with-Azure-OpenAI) | [使用 Azure OpenAI 部署](https://github.com/lobehub/lobe-chat/wiki/Deploy-with-Azure-OpenAI.zh-CN)
- [Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable) | [环境变量](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable.zh-CN)
- [Authentication Service](https://github.com/lobehub/lobe-chat/wiki/Authentication) | [身份验证服务](https://github.com/lobehub/lobe-chat/wiki/Authentication.zh-CN)
- [Upstream Sync](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync) | [自部署保持更新](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync.zh-CN)
- [Frequently Asked Questions](https://github.com/lobehub/lobe-chat/wiki/Common-Error) | [常见问题](https://github.com/lobehub/lobe-chat/wiki/Common-Error.zh-CN)
- [Data Statistics](https://github.com/lobehub/lobe-chat/wiki/Analytics) | [数据统计](https://github.com/lobehub/lobe-chat/wiki/Analytics.zh-CN)
<br/>
### ⌨️ Development
- [Technical Development Getting Started Guide](https://github.com/lobehub/lobe-chat/wiki/index) | [技术开发上手指南](https://github.com/lobehub/lobe-chat/wiki/index.zh-CN)
- [Code Style and Contribution Guidelines](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines) | [代码风格与贡献指南](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines.zh-CN)
- [Environment Setup Guide](https://github.com/lobehub/lobe-chat/wiki/Setup-Development) | [环境设置指南](https://github.com/lobehub/lobe-chat/wiki/Setup-Development.zh-CN)
- [Architecture Design](https://github.com/lobehub/lobe-chat/wiki/Architecture) | [架构设计](https://github.com/lobehub/lobe-chat/wiki/Architecture.zh-CN)
- [Directory Structure](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure) | [目录架构](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure.zh-CN)
- [Best Practices for State Management](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro) | [状态管理最佳实践](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro.zh-CN)
- [Data Store Selector](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors) | [数据存储取数模块](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors.zh-CN)
- [Code Style and Contribution Guidelines](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines) | [代码风格与贡献指南](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines.zh-CN)
- [Complete Guide to LobeChat Feature Development](https://github.com/lobehub/lobe-chat/wiki/Feature-Development) | [LobeChat 功能开发完全指南](https://github.com/lobehub/lobe-chat/wiki/Feature-Development.zh-CN)
- [Conversation API Implementation Logic](https://github.com/lobehub/lobe-chat/wiki/Chat-API) | [会话 API 实现逻辑](https://github.com/lobehub/lobe-chat/wiki/Chat-API.zh-CN)
- [How to Develop a New Feature](https://github.com/lobehub/lobe-chat/wiki/Feature-Development) | [如何开发一个新功能](https://github.com/lobehub/lobe-chat/wiki/Feature-Development.zh-CN)
- [Frontend](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend) | [前端实现](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend.zh-CN)
- [Directory Structure](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure) | [目录架构](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure.zh-CN)
- [Environment Setup Guide](https://github.com/lobehub/lobe-chat/wiki/Setup-Development) | [环境设置指南](https://github.com/lobehub/lobe-chat/wiki/Setup-Development.zh-CN)
- [How to Develop a New Feature](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend) | [如何开发一个新功能:前端实现](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend.zh-CN)
- [New Authentication Provider Guide](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers) | [新身份验证方式开发指南](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers.zh-CN)
- [Resources and References](https://github.com/lobehub/lobe-chat/wiki/Resources) | [资源与参考](https://github.com/lobehub/lobe-chat/wiki/Resources.zh-CN)
- [Technical Development Getting Started Guide](https://github.com/lobehub/lobe-chat/wiki/Intro) | [技术开发上手指南](https://github.com/lobehub/lobe-chat/wiki/Intro.zh-CN)
- [Testing Guide](https://github.com/lobehub/lobe-chat/wiki/Test) | [测试指南](https://github.com/lobehub/lobe-chat/wiki/Test.zh-CN)
<br/>
### 🌎 Internationalization
- [Internationalization Implementation Guide](https://github.com/lobehub/lobe-chat/wiki/Internationalization-Implementation) | [国际化实现指南](https://github.com/lobehub/lobe-chat/wiki/Internationalization-Implementation.zh-CN)
- [New Locale Guide](https://github.com/lobehub/lobe-chat/wiki/Add-New-Locale) | [新语种添加指南](https://github.com/lobehub/lobe-chat/wiki/Add-New-Locale.zh-CN)
- [New Authentication Provider Guide](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers) | [新身份验证方式开发指南](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers.zh-CN)
- [Testing Guide](https://github.com/lobehub/lobe-chat/wiki/Test) | [测试指南](https://github.com/lobehub/lobe-chat/wiki/Test.zh-CN)
- [Resources and References](https://github.com/lobehub/lobe-chat/wiki/Resources) | [资源与参考](https://github.com/lobehub/lobe-chat/wiki/Resources.zh-CN)
<br/>
### ⌨️ State Management
- [Best Practices for State Management](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro) | [状态管理最佳实践](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro.zh-CN)
- [Data Store Selector](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors) | [数据存储取数模块](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors.zh-CN)
<br/>
@@ -62,7 +54,6 @@ LobeChat is a open-source, extensible ([Function Calling][fc-url]), high-perform
### 🧩 Plugins
- [Plugin Development Guide](https://github.com/lobehub/lobe-chat/wiki/Plugin-Development) | [插件开发指南](https://github.com/lobehub/lobe-chat/wiki/Plugin-Development.zh-CN)
- [Plugin Index and Submit](https://github.com/lobehub/lobe-chat-plugins) | [插件索引与提交](https://github.com/lobehub/lobe-chat-plugins/blob/main/README.zh-CN.md)
- [Plugin SDK Docs](https://chat-plugin-sdk.lobehub.com) | [插件 SDK 文档](https://chat-plugin-sdk.lobehub.com)
@@ -115,7 +115,7 @@ const createI18nInstance = (lang) => {
- [🌐 feat(locale): Add fr-FR (#637) #645](https://github.com/lobehub/lobe-chat/pull/645)
- [🌐 Add russian localy #137](https://github.com/lobehub/lobe-chat/pull/137)
To add support for a new language, see the detailed steps in the [New Locale Guide](Add-New-Locale.zh-CN).
To add support for a new language, see the detailed steps in the [New Locale Guide](Add-New-Locale.zh-CN.md).
## Resources and Further Reading
@@ -2,43 +2,33 @@
#### 🏠 Home
- [TOC](Home) | [目录](Home)
- [TOC](Home.md) | [目录](Home.md)
<!-- DOCS LIST -->
#### 🤯 Usage
#### 🤯 Basic
- [Custom Agents Guide](https://github.com/lobehub/lobe-chat/wiki/Usage-Agents) | [自定义助手指南](https://github.com/lobehub/lobe-chat/wiki/Usage-Agents.zh-CN)
- [Plugin Usage](https://github.com/lobehub/lobe-chat/wiki/Plugins) | [插件使用](https://github.com/lobehub/lobe-chat/wiki/Plugins.zh-CN)
- [Topic Guide](https://github.com/lobehub/lobe-chat/wiki/Usage-Topics) | [话题指南](https://github.com/lobehub/lobe-chat/wiki/Usage-Topics.zh-CN)
#### 🛳 Self-Hosting
- [Docker Deployment Guide](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment) | [Docker 部署指引](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment.zh-CN)
- [Deploying with Azure OpenAI](https://github.com/lobehub/lobe-chat/wiki/Deploy-with-Azure-OpenAI) | [使用 Azure OpenAI 部署](https://github.com/lobehub/lobe-chat/wiki/Deploy-with-Azure-OpenAI.zh-CN)
- [Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable) | [环境变量](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable.zh-CN)
- [Authentication Service](https://github.com/lobehub/lobe-chat/wiki/Authentication) | [身份验证服务](https://github.com/lobehub/lobe-chat/wiki/Authentication.zh-CN)
- [Upstream Sync](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync) | [自部署保持更新](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync.zh-CN)
- [Frequently Asked Questions](https://github.com/lobehub/lobe-chat/wiki/Common-Error) | [常见问题](https://github.com/lobehub/lobe-chat/wiki/Common-Error.zh-CN)
- [Data Statistics](https://github.com/lobehub/lobe-chat/wiki/Analytics) | [数据统计](https://github.com/lobehub/lobe-chat/wiki/Analytics.zh-CN)
#### ⌨️ Development
- [Technical Development Getting Started Guide](https://github.com/lobehub/lobe-chat/wiki/Intro) | [技术开发上手指南](https://github.com/lobehub/lobe-chat/wiki/Intro.zh-CN)
- [Code Style and Contribution Guidelines](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines) | [代码风格与贡献指南](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines.zh-CN)
- [Environment Setup Guide](https://github.com/lobehub/lobe-chat/wiki/Setup-Development) | [环境设置指南](https://github.com/lobehub/lobe-chat/wiki/Setup-Development.zh-CN)
- [Architecture Design](https://github.com/lobehub/lobe-chat/wiki/Architecture) | [架构设计](https://github.com/lobehub/lobe-chat/wiki/Architecture.zh-CN)
- [Directory Structure](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure) | [目录架构](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure.zh-CN)
- [Best Practices for State Management](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro) | [状态管理最佳实践](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro.zh-CN)
- [Data Store Selector](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors) | [数据存储取数模块](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors.zh-CN)
- [Code Style and Contribution Guidelines](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines) | [代码风格与贡献指南](https://github.com/lobehub/lobe-chat/wiki/Contributing-Guidelines.zh-CN)
- [Complete Guide to LobeChat Feature Development](https://github.com/lobehub/lobe-chat/wiki/Feature-Development) | [LobeChat 功能开发完全指南](https://github.com/lobehub/lobe-chat/wiki/Feature-Development.zh-CN)
- [Conversation API Implementation Logic](https://github.com/lobehub/lobe-chat/wiki/Chat-API) | [会话 API 实现逻辑](https://github.com/lobehub/lobe-chat/wiki/Chat-API.zh-CN)
- [How to Develop a New Feature](https://github.com/lobehub/lobe-chat/wiki/Feature-Development) | [如何开发一个新功能](https://github.com/lobehub/lobe-chat/wiki/Feature-Development.zh-CN)
- [Frontend](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend) | [前端实现](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend.zh-CN)
- [Directory Structure](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure) | [目录架构](https://github.com/lobehub/lobe-chat/wiki/Folder-Structure.zh-CN)
- [Environment Setup Guide](https://github.com/lobehub/lobe-chat/wiki/Setup-Development) | [环境设置指南](https://github.com/lobehub/lobe-chat/wiki/Setup-Development.zh-CN)
- [How to Develop a New Feature](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend) | [如何开发一个新功能:前端实现](https://github.com/lobehub/lobe-chat/wiki/Feature-Development-Frontend.zh-CN)
- [New Authentication Provider Guide](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers) | [新身份验证方式开发指南](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers.zh-CN)
- [Resources and References](https://github.com/lobehub/lobe-chat/wiki/Resources) | [资源与参考](https://github.com/lobehub/lobe-chat/wiki/Resources.zh-CN)
- [Technical Development Getting Started Guide](https://github.com/lobehub/lobe-chat/wiki/Intro) | [技术开发上手指南](https://github.com/lobehub/lobe-chat/wiki/Intro.zh-CN)
- [Testing Guide](https://github.com/lobehub/lobe-chat/wiki/Test) | [测试指南](https://github.com/lobehub/lobe-chat/wiki/Test.zh-CN)
#### 🌎 Internationalization
- [Internationalization Implementation Guide](https://github.com/lobehub/lobe-chat/wiki/Internationalization-Implementation) | [国际化实现指南](https://github.com/lobehub/lobe-chat/wiki/Internationalization-Implementation.zh-CN)
- [New Locale Guide](https://github.com/lobehub/lobe-chat/wiki/Add-New-Locale) | [新语种添加指南](https://github.com/lobehub/lobe-chat/wiki/Add-New-Locale.zh-CN)
- [New Authentication Provider Guide](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers) | [新身份验证方式开发指南](https://github.com/lobehub/lobe-chat/wiki/Add-New-Authentication-Providers.zh-CN)
- [Testing Guide](https://github.com/lobehub/lobe-chat/wiki/Test) | [测试指南](https://github.com/lobehub/lobe-chat/wiki/Test.zh-CN)
- [Resources and References](https://github.com/lobehub/lobe-chat/wiki/Resources) | [资源与参考](https://github.com/lobehub/lobe-chat/wiki/Resources.zh-CN)
#### ⌨️ State Management
- [Best Practices for State Management](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro) | [状态管理最佳实践](https://github.com/lobehub/lobe-chat/wiki/State-Management-Intro.zh-CN)
- [Data Store Selector](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors) | [数据存储取数模块](https://github.com/lobehub/lobe-chat/wiki/State-Management-Selectors.zh-CN)
#### 🤖 Agents
@@ -46,7 +36,6 @@
#### 🧩 Plugins
- [Plugin Development Guide](https://github.com/lobehub/lobe-chat/wiki/Plugin-Development) | [插件开发指南](https://github.com/lobehub/lobe-chat/wiki/Plugin-Development.zh-CN)
- [Plugin Index and Submit](https://github.com/lobehub/lobe-chat-plugins) | [插件索引与提交](https://github.com/lobehub/lobe-chat-plugins/blob/main/README.zh-CN.md)
- [Plugin SDK Docs](https://chat-plugin-sdk.lobehub.com) | [插件 SDK 文档](https://chat-plugin-sdk.lobehub.com)
@@ -1,20 +0,0 @@
# Data Statistics
To better understand how users use LobeChat, we have integrated several free/open-source analytics services into LobeChat to collect usage data, which you can enable as needed.
#### TOC
- [Vercel Analytics](#vercel-analytics)
- [🚧 Posthog](#-posthog)
## Vercel Analytics
[Vercel Analytics](https://vercel.com/analytics) is a data analysis service launched by Vercel, which can help you collect website visit information, including traffic, sources, and devices used for access.
We have integrated Vercel Analytics into the code, and you can enable it by setting the environment variable `NEXT_PUBLIC_ANALYTICS_VERCEL=1`, and then open the Analytics tab in the Vercel deployment project to view your application's visit information.
Vercel Analytics provides 2500 free Web Analytics Events per month (roughly equivalent to page views), which is generally sufficient for a personal, self-hosted deployment.
If you need detailed instructions on using Vercel Analytics, please refer to [Vercel Web Analytics Quick Start](https://vercel.com/docs/analytics/quickstart).
## 🚧 Posthog
@@ -1,80 +0,0 @@
# Authentication Service
LobeChat supports configuring external authentication services for internal use within enterprises/organizations, facilitating centralized management of user authorization. Currently, [Auth0][auth0-client-page] is supported. This article will guide you through the process of setting up the authentication service.
### TOC
- [Creating an Auth0 Application](#creating-an-auth0-application)
- [Adding Users](#adding-users)
- [Configuring Environment Variables](#configuring-environment-variables)
- [Advanced - Connecting to an Existing Single Sign-On Service](#advanced---connecting-to-an-existing-single-sign-on-service)
- [Advanced - Configuring Social Login](#advanced---configuring-social-login)
## Creating an Auth0 Application
To begin, register and log in to [Auth0][auth0-client-page]. Then, navigate to the **_Applications_** section in the left sidebar to access the application management interface. Click on **_Create Application_** in the top right corner to initiate the application creation process.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/1b405347-f4c3-4c55-82f6-47116f2210d0)
Next, fill in the desired application name to be displayed to organization users. You can choose any application type, then click on **_Create_**.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/75c92f85-3ad3-4473-a9c6-e667e28d428d)
Once the application is successfully created, click on the respective application to access its details page. Switch to the **_Settings_** tab to view the corresponding configuration information.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/a1ed996b-95ef-4b7d-a50d-b4666eccfecb)
On the application configuration page, you also need to configure the **_Allowed Callback URLs_** to be `http(s)://<your-domain>/api/auth/callback/auth0`
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/575f46aa-f485-49bd-8b90-dbb1ce1a5c1b)
> \[!NOTE]
>
> You can fill in or modify the Allowed Callback URLs after deployment, but make sure the URLs are consistent with the deployed URLs!
## Adding Users
Navigate to the **_Users Management_** section in the left sidebar to access the user management interface. You can create new users for your organization to log in to LobeChat.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/3b8127ab-dc4f-4ff9-a4cb-dec3ef0295cc)
## Configuring Environment Variables
When deploying LobeChat, you need to configure the following environment variables:
| Environment Variable | Required | Description | Default Value | Example |
| --------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------- | ------------- | ------------------------------------------------------------------ |
| `ENABLE_OAUTH_SSO` | Yes | Enable single sign-on (SSO) for LobeChat. Set this value to `1` to enable single sign-on. | - | `1` |
| `NEXTAUTH_SECRET` | Yes | The key used to encrypt the session token in Auth.js. You can generate a key using the following command: `openssl rand -base64 32` | - | `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=` |
| `AUTH0_CLIENT_ID` | Yes | Client ID of the Auth0 application | - | `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P` |
| `AUTH0_CLIENT_SECRET` | Yes | Client Secret of the Auth0 application | - | `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm` |
| `AUTH0_ISSUER` | Yes | Domain of the Auth0 application | - | `https://example.auth0.com` |
| `ACCESS_CODE` | Yes | Add a password to access this service; you can set a sufficiently long random password to effectively disable access-code authorization | - | `awCT74` or `e3@09!` or `code1,code2,code3` |
> \[!NOTE]
>
> After successful deployment, users will be able to authenticate and use LobeChat using the users configured in Auth0.
## Advanced - Connecting to an Existing Single Sign-On Service
If your enterprise or organization already has an existing unified identity verification infrastructure, you can connect to an existing single sign-on service in **_Applications -> SSO Integrations_**.
Auth0 supports single sign-on services such as Azure Active Directory, Slack, Google Workspace, Office 365, and Zoom. For a detailed list of supported services, refer to [this page][auth0-sso-integrations].
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/32650f4f-d0b0-4843-b26d-d35bad11d8a3)
## Advanced - Configuring Social Login
If your enterprise or organization needs to support external personnel login, you can configure social login services in **_Authentication -> Social_**.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/7b6f6a6c-2686-49d8-9dbd-0516053f1efa)
> \[!NOTE]
>
> Configuring social login services will allow anyone to authenticate by default, which may lead to abuse of LobeChat by external personnel. If you need to restrict login personnel, be sure to configure a blocking policy.
>
> After enabling social login options, refer to [this article][auth0-login-actions-manual] to create an Action to set the deny/allow list.
[auth0-client-page]: https://manage.auth0.com/dashboard
[auth0-login-actions-manual]: https://auth0.com/blog/permit-or-deny-login-requests-using-auth0-actions/
[auth0-sso-integrations]: https://marketplace.auth0.com/features/sso-integrations
@@ -1,80 +0,0 @@
# Authentication Service
LobeChat supports configuring external authentication services for internal use by enterprises/organizations, allowing centralized management of user authorization. Currently, [Auth0][auth0-client-page] is supported. This article explains how to configure the authentication service.
### TOC
- [Creating an Auth0 Application](#creating-an-auth0-application)
- [Adding Users](#adding-users)
- [Configuring Environment Variables](#configuring-environment-variables)
- [Advanced - Connecting to an Existing Single Sign-On Service](#advanced---connecting-to-an-existing-single-sign-on-service)
- [Advanced - Configuring Social Login](#advanced---configuring-social-login)
## Creating an Auth0 Application
Register and log in to [Auth0][auth0-client-page], click "Applications" in the left sidebar to open the application management interface, then click "Create Application" in the top right corner to create an application.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/1b405347-f4c3-4c55-82f6-47116f2210d0)
Fill in the application name you want to display to your organization's users; you can choose any application type, then click "Create".
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/75c92f85-3ad3-4473-a9c6-e667e28d428d)
After the application is created, click it to open its details page and switch to the "Settings" tab to see the corresponding configuration information.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/a1ed996b-95ef-4b7d-a50d-b4666eccfecb)
On the application configuration page, you also need to configure the Allowed Callback URLs, filling in `http(s)://<your-domain>/api/auth/callback/auth0`
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/575f46aa-f485-49bd-8b90-dbb1ce1a5c1b)
> \[!NOTE]
>
> You can fill in or modify the Allowed Callback URLs after deployment, but make sure the URLs you fill in are consistent with the deployed URLs.
## Adding Users
Click "Users Management" in the left sidebar to open the user management interface, where you can create users for your organization to log in to LobeChat.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/3b8127ab-dc4f-4ff9-a4cb-dec3ef0295cc)
## Configuring Environment Variables
When deploying LobeChat, you need to configure the following environment variables:
| Environment Variable | Required | Description | Default | Example |
| --------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------- | ------- | ------------------------------------------------------------------ |
| `ENABLE_OAUTH_SSO` | Yes | Enables single sign-on (SSO) for LobeChat. Set to `1` to enable it. | - | `1` |
| `NEXTAUTH_SECRET` | Yes | The key used to encrypt Auth.js session tokens. You can generate one with: `openssl rand -base64 32` | - | `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=` |
| `AUTH0_CLIENT_ID` | Yes | Client ID of the Auth0 application | - | `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P` |
| `AUTH0_CLIENT_SECRET` | Yes | Client Secret of the Auth0 application | - | `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm` |
| `AUTH0_ISSUER` | Yes | Domain of the Auth0 application | - | `https://example.auth0.com` |
| `ACCESS_CODE` | Yes | Adds a password for accessing this service; you can set a sufficiently long random password to effectively disable access-code authorization | - | `awCT74` or `e3@09!` or `code1,code2,code3` |
> \[!NOTE]
>
> After successful deployment, users will be able to authenticate and use LobeChat with the users configured in Auth0.
## Advanced - Connecting to an Existing Single Sign-On Service
If your enterprise or organization already has a unified authentication infrastructure, you can connect your existing single sign-on service under Applications -> SSO Integrations.
Auth0 supports single sign-on services such as Azure Active Directory, Slack, Google Workspace, Office 365, and Zoom; see the detailed list of supported services [here][auth0-sso-integrations].
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/32650f4f-d0b0-4843-b26d-d35bad11d8a3)
## Advanced - Configuring Social Login
If your enterprise or organization needs to support login by external users, you can configure social login services under Authentication -> Social.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/7b6f6a6c-2686-49d8-9dbd-0516053f1efa)
> \[!NOTE]
>
> Configuring social login services allows anyone to authenticate by default, which may lead to LobeChat being abused by external users. If you need to restrict who can log in, be sure to configure a blocking policy.
>
> After enabling social login options, refer to [this article][auth0-login-actions-manual] to create an Action that sets a deny/allow list.
[auth0-client-page]: https://manage.auth0.com/dashboard
[auth0-login-actions-manual]: https://auth0.com/blog/permit-or-deny-login-requests-using-auth0-actions/
[auth0-sso-integrations]: https://marketplace.auth0.com/features/sso-integrations
@@ -1,87 +0,0 @@
# Frequently Asked Questions
## Configuring the `OPENAI_PROXY_URL` Environment Variable but Receiving an Empty Response
### Problem Description
After configuring the `OPENAI_PROXY_URL` environment variable, you may encounter a situation where the response from the AI is empty. This could be due to an incorrect configuration of the `OPENAI_PROXY_URL`.
### Solution
Recheck and confirm that `OPENAI_PROXY_URL` is set correctly, including the `/v1` suffix (if required).
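One quick way to verify the value is to probe the `/models` endpoint that OpenAI-compatible endpoints expose. The sketch below is illustrative; the proxy URL shown is a placeholder, not a real service:

```typescript
// Build the probe URL for an OpenAI-compatible endpoint; a missing `/v1`
// suffix in OPENAI_PROXY_URL is a common cause of empty responses.
const buildProbeURL = (proxyURL: string): string =>
  `${proxyURL.replace(/\/+$/, '')}/models`;

// Usage (placeholder URL):
//   const res = await fetch(buildProbeURL('https://your-proxy.example.com/v1'), {
//     headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
//   });
// A 200 response with a model list means the base URL and suffix are correct.
```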
### Related Discussion Links
- [Why is the response blank after installing and configuring Docker and environment variables?](https://github.com/lobehub/lobe-chat/discussions/623)
- [Reasons for errors when using third-party APIs](https://github.com/lobehub/lobe-chat/discussions/734)
- [No response from the chat when the proxy server address is filled](https://github.com/lobehub/lobe-chat/discussions/1065)
If the issue persists, it is recommended to raise the problem in the community with relevant logs and configuration information for other developers or maintainers to provide assistance.
## When using a proxy (e.g. Surge), I encounter the `UNABLE_TO_VERIFY_LEAF_SIGNATURE` error
### Problem Description
When making network requests during private deployment, certificate validation errors may occur. The error messages may be as follows:
```
[TypeError: fetch failed] {
cause: [Error: unable to verify the first certificate] {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
}
```
Or:
```
{
"endpoint": "https://api.openai.com/v1",
"error": {
"cause": {
"code": "UNABLE_TO_VERIFY_LEAF_SIGNATURE"
}
}
}
```
This problem typically occurs when using a proxy server with a self-signed certificate or a man-in-the-middle certificate that is not trusted by Node.js by default.
### Solution
To resolve this issue, you can add an environment variable to bypass Node.js certificate validation when starting the application. The specific approach is to include `NODE_TLS_REJECT_UNAUTHORIZED=0` in the startup command. For example:
```bash
NODE_TLS_REJECT_UNAUTHORIZED=0 npm run start
```
Alternatively, when running in a Docker container, you can set the environment variable in the Dockerfile or docker-compose.yml:
```dockerfile
# In the Dockerfile
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
```
```yaml
# In the docker-compose.yml
environment:
- NODE_TLS_REJECT_UNAUTHORIZED=0
```
Example Docker run command:
```bash
docker run -e NODE_TLS_REJECT_UNAUTHORIZED=0 <other parameters> <image name>
```
Please note that this method reduces security because it allows Node.js to accept unverified certificates. Therefore, it is only recommended for use in private deployments with fully trusted network environments, and the default certificate validation settings should be restored after resolving the certificate issue.
### More Secure Alternatives
If possible, it is recommended to address certificate issues using the following methods:
1. Ensure that all man-in-the-middle certificates are correctly installed on the proxy server and the corresponding clients.
2. Replace self-signed or man-in-the-middle certificates with valid certificates issued by trusted certificate authorities.
3. Properly configure the certificate chain in the code to ensure that Node.js can validate to the root certificate.
Implementing these methods can resolve certificate validation issues without sacrificing security.
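For method 3, one option in Node.js is to pass the proxy's root CA to an `https.Agent` instead of disabling verification globally. This is a sketch under the assumption that you have the CA certificate as a PEM file; the path in the usage comment is a placeholder:

```typescript
import https from 'node:https';

// Trust a specific root / man-in-the-middle CA for outbound requests,
// rather than disabling TLS verification globally.
const createAgent = (caPem: string | Buffer): https.Agent =>
  new https.Agent({ ca: caPem });

// Usage (placeholder path):
//   import { readFileSync } from 'node:fs';
//   const agent = createAgent(readFileSync('/path/to/proxy-root-ca.pem'));
//   https.get('https://api.openai.com/v1/models', { agent }, (res) => { /* ... */ });
```

Alternatively, Node.js can append extra trusted CAs without code changes via the `NODE_EXTRA_CA_CERTS=/path/to/proxy-root-ca.pem` environment variable (path again a placeholder).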
@@ -1,54 +0,0 @@
# Deploying with Azure OpenAI
LobeChat supports using [Azure OpenAI][azure-openai-url] as the model service provider for OpenAI. This document will guide you through the configuration of Azure OpenAI.
#### TOC
- [Usage Limitations](#usage-limitations)
- [Configuration in the Interface](#configuration-in-the-interface)
- [Configuration at Deployment](#configuration-at-deployment)
## Usage Limitations
Considering development costs ([#178][rfc]), the current version of LobeChat does not fully conform to Azure OpenAI's implementation model. Instead, it adopts a solution based on `openai` that is compatible with Azure OpenAI. This brings about the following limitations:
- You can only choose one between OpenAI and Azure OpenAI. Once you enable Azure OpenAI, you will not be able to use OpenAI as the model service provider.
- LobeChat requires deployment names to be the same as the model names in order to function properly. For example, the deployment name for the `gpt-35-turbo` model must be `gpt-35-turbo`, otherwise LobeChat will not be able to match the model correctly.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/267082091-d89d53d3-1c8c-40ca-ba15-0a9af2a79264.png)
- Due to the complexity of integrating the Azure OpenAI SDK, it is currently impossible to query the model list of configured resources.
## Configuration in the Interface
Click "Operation" - "Settings" in the bottom left corner, switch to the "Language Model" tab, and turn on the "Azure OpenAI" switch to start using Azure OpenAI.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/267083420-422a3714-627e-4bef-9fbc-141a2a8ca916.png)
You can fill in the corresponding configuration items as needed:
- **APIKey**: The API key you applied for on the Azure OpenAI account page, which can be found in the "Keys and Endpoints" section.
- **API Address**: Azure API address, which can be found in the "Keys and Endpoints" section when checking resources from the Azure portal.
- **Azure API Version**: Azure's API version, which follows the YYYY-MM-DD format; refer to the [latest version][azure-api-version-url].
After completing the above field configuration, click on "Check". If the prompt says "Check Passed", it means the configuration was successful.
<br/>
## Configuration at Deployment
If you want the deployed version to be directly configured with Azure OpenAI for end users to use immediately, you need to configure the following environment variables at deployment:
| Environment Variable | Required | Description | Default Value | Example |
| -------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | -------------------------------------------------------------- |
| `USE_AZURE_OPENAI` | Yes | Set this value to `1` to enable Azure OpenAI configuration | - | `1` |
| `AZURE_API_KEY` | Yes | This is the API key you applied for on the Azure OpenAI account page | - | `c55168be3874490ef0565d9779ecd5a6` |
| `OPENAI_PROXY_URL` | Yes | Azure API address, can be found in the "Keys and Endpoints" section | - | `https://docs-test-001.openai.azure.com` |
| `AZURE_API_VERSION` | No | Azure's API version, follows the YYYY-MM-DD format | 2023-08-01-preview | `2023-05-15`, refer to [latest version][azure-api-version-url] |
| `ACCESS_CODE` | No | Add a password to access this service; you can set a long password to avoid leaking. If this value contains a comma, it is a password array. | - | `awCT74` or `e3@09!` or `code1,code2,code3` |
> \[!NOTE]
>
> When you enable `USE_AZURE_OPENAI` on the server side, users will not be able to modify and use the OpenAI key in the front-end configuration.
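Putting the variables from the table together, a Docker-based deployment might look like the following sketch (the key and endpoint reuse the placeholder examples above; substitute your own values):

```bash
docker run -d -p 3210:3210 \
  -e USE_AZURE_OPENAI=1 \
  -e AZURE_API_KEY=c55168be3874490ef0565d9779ecd5a6 \
  -e OPENAI_PROXY_URL=https://docs-test-001.openai.azure.com \
  -e AZURE_API_VERSION=2023-08-01-preview \
  -e ACCESS_CODE=lobe66 \
  --name lobe-chat \
  lobehub/lobe-chat
```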
[azure-api-version-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions
[azure-openai-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/concepts/models
[rfc]: https://github.com/lobehub/lobe-chat/discussions/178

# Docker Deployment Guide
[![][docker-release-shield]][docker-release-link]
[![][docker-size-shield]][docker-size-link]
[![][docker-pulls-shield]][docker-pulls-link]
We provide [Docker Images][docker-release-link] for you to deploy LobeChat service on your private device.
#### TOC
- [Install Docker container environment](#install-docker-container-environment)
- [Deploy container image](#deploy-container-image)
- [`A` Command deployment (recommended)](#a-command-deployment-recommended)
- [`B` Docker Compose](#b-docker-compose)
## Install Docker container environment
If already installed, skip this step.
**Ubuntu:**
```fish
$ apt install docker.io
```
**CentOS:**
```fish
$ yum install docker
```
## Deploy container image
### `A` Command deployment (recommended)
Use the following command to start LobeChat service with one click:
```fish
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
> \[!NOTE]
>
> - The default mapped port is `3210`. Make sure it is not occupied or manually change the port mapping.
> - Replace `sk-xxxx` in the above command with your own OpenAI API Key.
> - The official Docker image doesn't set a password by default. Please add your own password to improve security.
> - For a complete list of environment variables supported by LobeChat, please refer to the [Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable.zh-CN) section.
> \[!WARNING]
>
> If the architecture of your **deployed server differs from the container architecture**, you may need to perform cross-compilation for **Sharp**. For further details, please refer to the documentation on [Sharp Cross-platform](https://sharp.pixelplumbing.com/install#cross-platform).
#### Use a proxy address
If you need to use OpenAI service through a proxy, you can use the `OPENAI_PROXY_URL` environment variable to configure the proxy address:
```fish
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
> \[!NOTE]
>
> As the official Docker image takes about half an hour to build, if an "update available" prompt appears after you redeploy, wait for the image build to finish before deploying again.
#### Crontab Auto-Update Script
If you want to automatically get the latest images, you can do the following.
First, create a `lobe.env` configuration file with various environment variables, for example:
```env
OPENAI_API_KEY=sk-xxxx
OPENAI_PROXY_URL=https://api-proxy.com/v1
ACCESS_CODE=arthals2333
CUSTOM_MODELS=-gpt-4,-gpt-4-32k,-gpt-3.5-turbo-16k,gpt-3.5-turbo-1106=gpt-3.5-turbo-16k,gpt-4-0125-preview=gpt-4-turbo,gpt-4-vision-preview=gpt-4-vision
```
Then, you can use the following script to automatically update:
```bash
#!/bin/bash
# auto-update-lobe-chat.sh
# Set proxy (optional)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
# Pull the latest image and store the output in a variable
output=$(docker pull lobehub/lobe-chat:latest 2>&1)
# Check if the pull command executed successfully
if [ $? -ne 0 ]; then
exit 1
fi
# Check if the output contains a specific string
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
# If the image is already up to date, then do nothing
if [ $? -eq 0 ]; then
exit 0
fi
echo "Detected Lobe-Chat update"
# Remove old container
echo "Removed: $(docker rm -f Lobe-Chat)"
# Run new container
echo "Started: $(docker run -d --network=host --env-file /path/to/lobe.env --name=Lobe-Chat --restart=always lobehub/lobe-chat)"
# Print the update time and version
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
# Clean up unused images
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```
This script can be used in Crontab, but make sure your Crontab can find the correct Docker command. It's recommended to use absolute paths.
Configure Crontab to execute the script every 5 minutes:
```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```
### `B` Docker Compose
The configuration file for using `docker-compose` is as follows:
```yml
version: '3.8'
services:
lobe-chat:
image: lobehub/lobe-chat
container_name: lobe-chat
restart: always
ports:
- '3210:3210'
environment:
OPENAI_API_KEY: sk-xxxx
OPENAI_PROXY_URL: https://api-proxy.com/v1
ACCESS_CODE: lobe66
```
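With the configuration above saved as `docker-compose.yml`, the service can be started in the same directory with:

```fish
$ docker-compose up -d
```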
#### Crontab Auto-Update Script
Similarly, you can use the following script to update Lobe Chat automatically. When using `Docker Compose`, environment variables do not need additional configuration.
```bash
#!/bin/bash
# auto-update-lobe-chat.sh
# Set proxy (optional)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
# Pull the latest image and store the output in a variable
output=$(docker pull lobehub/lobe-chat:latest 2>&1)
# Check if the pull command executed successfully
if [ $? -ne 0 ]; then
exit 1
fi
# Check if the output contains a specific string
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
# If the image is already up to date, then do nothing
if [ $? -eq 0 ]; then
exit 0
fi
echo "Detected Lobe-Chat update"
# Remove old container
echo "Removed: $(docker rm -f Lobe-Chat)"
# Maybe you need to enter the directory where `docker-compose.yml` is located first
# cd /path/to/docker-compose-folder
# Run new container
echo "Started: $(docker-compose up -d)"
# Print the update time and version
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
# Clean up unused images
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```
This script can also be used in Crontab, but make sure your Crontab can find the correct Docker command. It's recommended to use absolute paths.
Configure Crontab to execute the script every 5 minutes:
```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```
<!-- LINK GROUP -->
[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square

# Docker Deployment Guide

[![][docker-release-shield]][docker-release-link]
[![][docker-size-shield]][docker-size-link]
[![][docker-pulls-shield]][docker-pulls-link]

We provide [Docker images][docker-release-link] for you to deploy the LobeChat service on your own private device.

#### TOC

- [Install Docker container environment](#install-docker-container-environment)
- [Deploy container image](#deploy-container-image)
  - [`A` Command deployment (recommended)](#a-command-deployment-recommended)
  - [`B` Docker Compose](#b-docker-compose)

## Install Docker container environment

If already installed, skip this step.

**Ubuntu:**

```fish
$ apt install docker.io
```

**CentOS:**

```fish
$ yum install docker
```

## Deploy container image

### `A` Command deployment (recommended)

Use the following command to start the LobeChat service with one click:

```fish
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e ACCESS_CODE=lobe66 \
  --name lobe-chat \
  lobehub/lobe-chat
```

> \[!NOTE]
>
> - The default mapped port is `3210`. Make sure it is not occupied, or manually change the port mapping.
> - Replace `sk-xxxx` in the above command with your own OpenAI API Key.
> - No password is set in the official Docker image. It is strongly recommended to add one to improve security; otherwise you may run into situations like [My API Key was stolen!!!](https://github.com/lobehub/lobe-chat/issues/1123).
> - For the complete list of environment variables supported by LobeChat, please refer to the [Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable.zh-CN) section.

> \[!WARNING]
>
> Note that when the **deployment architecture differs from the image architecture**, **Sharp** needs to be cross-compiled; see [Sharp cross-platform](https://sharp.pixelplumbing.com/install#cross-platform) for details.

#### Use a proxy address

If you need to use the OpenAI service through a proxy, you can configure the proxy address with the `OPENAI_PROXY_URL` environment variable:

```fish
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
  -e ACCESS_CODE=lobe66 \
  --name lobe-chat \
  lobehub/lobe-chat
```

> \[!NOTE]
>
> Since the official Docker image takes about half an hour to build, if an "update available" prompt appears after you redeploy, wait for the image build to finish and deploy again.

#### Crontab Auto-Update Script

If you want to automatically get the latest image, you can do the following.

First, create a `lobe.env` configuration file containing the environment variables, for example:

```env
OPENAI_API_KEY=sk-xxxx
OPENAI_PROXY_URL=https://api-proxy.com/v1
ACCESS_CODE=arthals2333
CUSTOM_MODELS=-gpt-4,-gpt-4-32k,-gpt-3.5-turbo-16k,gpt-3.5-turbo-1106=gpt-3.5-turbo-16k,gpt-4-0125-preview=gpt-4-turbo,gpt-4-vision-preview=gpt-4-vision
```

Then use the following script to update automatically:

```bash
#!/bin/bash
# auto-update-lobe-chat.sh

# Set proxy (optional)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890

# Pull the latest image and store the output in a variable
output=$(docker pull lobehub/lobe-chat:latest 2>&1)

# Check if the pull command executed successfully
if [ $? -ne 0 ]; then
  exit 1
fi

# Check if the output contains a specific string
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"

# If the image is already up to date, do nothing
if [ $? -eq 0 ]; then
  exit 0
fi

echo "Detected Lobe-Chat update"

# Remove the old container
echo "Removed: $(docker rm -f Lobe-Chat)"

# Run the new container
echo "Started: $(docker run -d --network=host --env-file /path/to/lobe.env --name=Lobe-Chat --restart=always lobehub/lobe-chat)"

# Print the update time and version
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"

# Clean up unused images
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```

This script can be used in Crontab, but make sure your Crontab can find the correct Docker command. Absolute paths are recommended.

Configure Crontab to execute the script every 5 minutes:

```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```

### `B` Docker Compose

The configuration file for `docker-compose` is as follows:

```yml
version: '3.8'

services:
  lobe-chat:
    image: lobehub/lobe-chat
    container_name: lobe-chat
    restart: always
    ports:
      - '3210:3210'
    environment:
      OPENAI_API_KEY: sk-xxxx
      OPENAI_PROXY_URL: https://api-proxy.com/v1
      ACCESS_CODE: lobe66
```

#### Crontab Auto-Update Script

Similarly, you can use the following script to update Lobe Chat automatically. When using `Docker Compose`, no extra environment variable configuration is needed.

```bash
#!/bin/bash
# auto-update-lobe-chat.sh

# Set proxy (optional)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890

# Pull the latest image and store the output in a variable
output=$(docker pull lobehub/lobe-chat:latest 2>&1)

# Check if the pull command executed successfully
if [ $? -ne 0 ]; then
  exit 1
fi

# Check if the output contains a specific string
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"

# If the image is already up to date, do nothing
if [ $? -eq 0 ]; then
  exit 0
fi

echo "Detected Lobe-Chat update"

# Remove the old container
echo "Removed: $(docker rm -f Lobe-Chat)"

# You may need to enter the directory where `docker-compose.yml` is located first
# cd /path/to/docker-compose-folder

# Run the new container
echo "Started: $(docker-compose up -d)"

# Print the update time and version
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"

# Clean up unused images
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```

This script can also be used in Crontab, but make sure your Crontab can find the correct Docker command. Absolute paths are recommended.

Configure Crontab to execute the script every 5 minutes:

```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```

[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square

# Environment Variables
LobeChat provides additional configuration options during deployment, which can be set using environment variables
#### TOC
- [General Variables](#general-variables)
- [`ACCESS_CODE`](#access_code)
- [`ENABLE_OAUTH_SSO`](#enable_oauth_sso)
- [`NEXT_PUBLIC_BASE_PATH`](#next_public_base_path)
- [Authentication Service Providers](#authentication-service-providers)
- [Common Settings](#common-settings)
- [Auth0](#auth0)
- [Model Service Providers](#model-service-providers)
- [OpenAI](#openai)
- [Azure OpenAI](#azure-openai)
- [Zhipu AI](#zhipu-ai)
- [Moonshot AI](#moonshot-ai)
- [Google AI](#google-ai)
- [AWS Bedrock](#aws-bedrock)
- [Ollama](#ollama)
- [Plugin Service](#plugin-service)
- [`PLUGINS_INDEX_URL`](#plugins_index_url)
- [`PLUGIN_SETTINGS`](#plugin_settings)
- [Agent Service](#agent-service)
- [`AGENTS_INDEX_URL`](#agents_index_url)
- [Data Analytics](#data-analytics)
- [Vercel Analytics](#vercel-analytics)
- [Posthog Analytics](#posthog-analytics)
- [Umami Analytics](#umami-analytics)
## General Variables
### `ACCESS_CODE`
- Type: Optional
- Description: Add a password to access the LobeChat service; you can set a long password to avoid leaking. If this value contains a comma, it is a password array.
- Default: `-`
- Example: `awCTe)re_r74` or `rtrt_ewee3@09!` or `code1,code2,code3`
### `ENABLE_OAUTH_SSO`
- Type: Optional
- Description: Enable OAuth single sign-on (SSO) for LobeChat. Set to `1` to enable OAuth SSO. See [Authentication Service Providers](#authentication-service-providers) for more details.
- Default: `-`
- Example: `1`
### `NEXT_PUBLIC_BASE_PATH`
- Type: Optional
- Description: Add `basePath` for LobeChat
- Default: `-`
- Example: `/test`
### `DEFAULT_AGENT_CONFIG`
- Type: Optional
- Description: Used to configure the default configuration of the LobeChat default assistant. It supports various data types and structures, including key-value pairs, nested fields, array values, etc.
- Default Value: `-`
- Example: `'model=gpt-4-1106-preview;params.max_tokens=300;plugins=search-engine,lobe-image-designer`
`DEFAULT_AGENT_CONFIG` is used to configure the default configuration of the LobeChat default agent. It supports various data types and structures, including key-value pairs, nested fields, array values, etc. The table below provides detailed explanations of the configuration options, examples, and corresponding explanations for the `DEFAULT_AGENT_CONFIG` environment variable:
| Configuration Type | Example | Explanation |
| ----------------------- | -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| Basic Key-Value Pair | `model=gpt-4` | Set the model to `gpt-4`. |
| Nested Field | `tts.sttLocale=en-US` | Set the language region for the text-to-speech service to `en-US`. |
| Array | `plugins=search-engine,lobe-image-designer` | Enable the `search-engine` and `lobe-image-designer` plugins. |
| Chinese Comma | `plugins=search-engine，lobe-image-designer` | The same as above, demonstrating support for Chinese comma separation. |
| Multiple Configurations | `model=glm-4;provider=zhipu` | Set the model to `glm-4` and the model provider to `zhipu`. |
| Numeric Value | `params.max_tokens=300` | Set the maximum number of tokens to `300`. |
| Boolean Value | `enableAutoCreateTopic=true` | Enable automatic topic creation. |
| Special Characters | `inputTemplate="Hello; I am a bot;"` | Set the input template to `Hello; I am a bot;`. |
| Error Handling | `model=gpt-4;maxToken` | Ignore the invalid entry `maxToken` and only parse out `model=gpt-4`. |
| Value Overriding | `model=gpt-4;model=gpt-4-1106-preview` | If the key is duplicated, use the value that appears last, in this case, the value of `model` is `gpt-4-1106-preview`. |
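The parsing rules in the table above can be sketched in a few lines of Python. This is illustrative only, not LobeChat's actual implementation, and it does not handle quoted values containing semicolons:

```python
def parse_default_agent_config(raw: str) -> dict:
    """Sketch of DEFAULT_AGENT_CONFIG parsing: `;`-separated key=value pairs,
    dot-nested keys, comma-separated arrays (ASCII `,` or Chinese `，`),
    invalid entries ignored, later duplicates overriding earlier values."""
    config: dict = {}
    for entry in raw.split(";"):
        if "=" not in entry:
            continue  # invalid entries such as a bare `maxToken` are ignored
        key, value = entry.split("=", 1)
        target = config
        *parents, leaf = key.strip().split(".")
        for part in parents:
            target = target.setdefault(part, {})
        normalized = value.replace("，", ",")
        # a comma-separated value becomes an array; duplicates overwrite
        target[leaf] = [v for v in normalized.split(",") if v] if "," in normalized else normalized
    return config
```

For instance, `model=gpt-4;tts.sttLocale=en-US;plugins=search-engine,lobe-image-designer` would yield a nested dict with a `plugins` array.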
Related discussions:
- [\[RFC\] 022 - Default Helper Parameters for Environment Variable Configuration](https://github.com/lobehub/lobe-chat/discussions/913)
## Authentication Service Providers
### Common Settings
#### `NEXTAUTH_SECRET`
- Type: Required
- Description: The secret key used to encrypt the Auth.js session token. You can generate a secret key using the following command: `openssl rand -base64 32`
- Default: `-`
- Example: `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=`
### Auth0
> \[!NOTE]
>
> We only support the Auth0 authentication service provider at the moment. If you need to use other authentication service providers, you can submit a feature request or pull request.
#### `AUTH0_CLIENT_ID`
- Type: Required
- Description: The Client ID of the Auth0 application, you can go [here][auth0-client-page] and navigate to the application settings to view
- Default: `-`
- Example: `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P`
#### `AUTH0_CLIENT_SECRET`
- Type: Required
- Description: The Client Secret of the Auth0 application
- Default: `-`
- Example: `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm`
#### `AUTH0_ISSUER`
- Type: Required
- Description: The issuer/domain of the Auth0 application
- Default: `-`
- Example: `https://example.auth0.com`
## Model Service Providers
### OpenAI
#### `OPENAI_API_KEY`
- Type: Required
- Description: This is the API key you apply for on the OpenAI account page, you can go [here][openai-api-page] to view
- Default: `-`
- Example: `sk-xxxxxx...xxxxxx`
#### `OPENAI_PROXY_URL`
- Type: Optional
- Description: If you manually configure the OpenAI interface proxy, you can use this configuration item to override the default OpenAI API request base URL
- Default: `https://api.openai.com/v1`
- Example: `https://api.chatanywhere.cn` or `https://aihubmix.com/v1`
> \[!NOTE]
>
> Please check the request suffix of your proxy service provider. Some proxy service providers may add `/v1` to the request suffix, while others may not.
> If you find that the AI returns an empty message during testing, try adding the `/v1` suffix and retrying.
Whether to fill in `/v1` is closely related to the model service provider. For example, the default address of OpenAI is `api.openai.com/v1`. If your proxy forwards this interface, you can directly fill in `proxy.com`. However, if the model service provider directly forwards the `api.openai.com` domain, you need to add the `/v1` URL by yourself.
Related discussions:
- [Why is the return value blank after installing Docker, configuring the environment variables?](https://github.com/lobehub/lobe-chat/discussions/623)
- [Reasons for errors when using third-party interfaces](https://github.com/lobehub/lobe-chat/discussions/734)
- [No response when filling in the proxy server address for chatting](https://github.com/lobehub/lobe-chat/discussions/1065)
#### `CUSTOM_MODELS`
- Type: Optional
- Description: Used to control the model list. Use `+` to add a model, `-` to hide a model, and `model_name=display_name` to customize the display name of a model, separated by commas.
- Default: `-`
- Example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
The above example adds `qwen-7b-chat` and `glm-6b` to the model list, removes `gpt-3.5-turbo` from the list, and displays the model name `gpt-4-0125-preview` as `gpt-4-turbo`. If you want to disable all models first and then enable specific models, you can use `-all,+gpt-3.5-turbo`, which means only `gpt-3.5-turbo` will be enabled.
You can find all current model names in [modelProviders](https://github.com/lobehub/lobe-chat/tree/main/src/config/modelProviders).
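The `+`/`-`/rename semantics can be summarized with a short Python sketch (illustrative only, not the real implementation), mapping model IDs to display names:

```python
def apply_custom_models(spec: str, defaults: list[str]) -> dict[str, str]:
    """Sketch of CUSTOM_MODELS semantics: `+id` adds, `-id` hides,
    `-all` clears everything, `id=display` customizes the display name."""
    models = {m: m for m in defaults}
    for token in spec.split(","):
        token = token.strip()
        if not token:
            continue
        if token == "-all":
            models.clear()          # disable all models first
        elif token.startswith("-"):
            models.pop(token[1:], None)
        elif token.startswith("+"):
            models[token[1:]] = token[1:]
        elif "=" in token:
            model_id, display = token.split("=", 1)
            models[model_id] = display
    return models
```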
### Azure OpenAI
If you need to use Azure OpenAI to provide model services, you can refer to the [Deploy with Azure OpenAI](Deploy-with-Azure-OpenAI.md) section for detailed steps. Here are the environment variables related to Azure OpenAI.
#### `USE_AZURE_OPENAI`
- Type: Optional
- Description: Set this value to `1` to enable Azure OpenAI configuration
- Default: `-`
- Example: `1`
#### `AZURE_API_KEY`
- Type: Optional
- Description: This is the API key you apply for on the Azure OpenAI account page
- Default: `-`
- Example: `c55168be3874490ef0565d9779ecd5a6`
#### `AZURE_API_VERSION`
- Type: Optional
- Description: Azure's API version, following the YYYY-MM-DD format
- Default: `2023-08-01-preview`
- Example: `2023-05-15`, refer to [latest version][azure-api-verion-url]
<br/>
### Zhipu AI
#### `ZHIPU_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the Zhipu AI service
- Default Value: -
- Example: `4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
### Moonshot AI
#### `MOONSHOT_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the Moonshot AI service
- Default Value: -
- Example: `Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
### Google AI
#### `GOOGLE_API_KEY`
- Type: Required
- Description: This is the API key you applied for on Google Cloud Platform, used to access Google AI services
- Default Value: -
- Example: `AIraDyDwcw254kwJaGjI9wwaHcdDCS__Vt3xQE`
### AWS Bedrock
#### `AWS_ACCESS_KEY_ID`
- Type: Required
- Description: The access key ID for AWS service authentication
- Default Value: -
- Example: `AKIA5STVRLFSB4S9HWBR`
#### `AWS_SECRET_ACCESS_KEY`
- Type: Required
- Description: The secret key for AWS service authentication
- Default Value: -
- Example: `Th3vXxLYpuKcv2BARktPSTPxx+jbSiFT6/0w7oEC`
#### `AWS_REGION`
- Type: Optional
- Description: The region setting for AWS services
- Default Value: `us-east-1`
- Example: `us-east-1`
### Ollama
#### `OLLAMA_PROXY_URL`
- Type: Optional
- Description: Used to enable the Ollama service. When set, Ollama will appear as a selectable model provider on the language model settings page, and you can also specify a custom language model.
- Default: -
- Example: `http://127.0.0.1:11434/v1`
## Plugin Service
### `PLUGINS_INDEX_URL`
- Type: Optional
- Description: The index address of the LobeChat plugin market. If you have deployed the plugin market service yourself, you can use this variable to override the default plugin market address
- Default: `https://chat-plugins.lobehub.com`
### `PLUGIN_SETTINGS`
- Type: Optional
- Description: Used to set the plugin settings, the format is `plugin-identifier:key1=value1;key2=value2`, multiple settings fields are separated by semicolons `;`, multiple plugin settings are separated by commas `,`.
- Default: `-`
- Example: `search-engine:SERPAPI_API_KEY=xxxxx,plugin-2:key1=value1;key2=value2`
The above example adds `search-engine` plugin settings, and sets the `SERPAPI_API_KEY` of the `search-engine` plugin to `xxxxx`, and sets the `key1` of the `plugin-2` plugin to `value1`, and `key2` to `value2`. The generated plugin settings configuration is as follows:
```json
{
"plugin-2": {
"key1": "value1",
"key2": "value2"
},
"search-engine": {
"SERPAPI_API_KEY": "xxxxx"
}
}
```
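The transformation from the flat string to the JSON above can be sketched in Python (illustrative only, not LobeChat's actual parser; values containing literal commas or semicolons are not handled):

```python
def parse_plugin_settings(raw: str) -> dict:
    """Sketch of PLUGIN_SETTINGS parsing: plugins separated by `,`,
    each formatted as `identifier:key1=value1;key2=value2`."""
    settings: dict = {}
    for plugin_entry in raw.split(","):
        if ":" not in plugin_entry:
            continue
        identifier, kv_part = plugin_entry.split(":", 1)
        pairs = {}
        for pair in kv_part.split(";"):
            if "=" in pair:
                key, value = pair.split("=", 1)
                pairs[key] = value
        settings[identifier.strip()] = pairs
    return settings
```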
## Agent Service
### `AGENTS_INDEX_URL`
- Type: Optional
- Description: The index address of the LobeChat role market. If you have deployed the role market service yourself, you can use this variable to override the default role market address
- Default: `https://chat-agents.lobehub.com`
<br/>
## Data Analytics
### Vercel Analytics
#### `NEXT_PUBLIC_ANALYTICS_VERCEL`
- Type: Optional
- Description: Environment variable to enable [Vercel Analytics][vercel-analytics-url]. Set to `1` to enable Vercel Analytics.
- Default: `-`
- Example: `1`
#### `NEXT_PUBLIC_VERCEL_DEBUG`
- Type: Optional
- Description: Enable debug mode for Vercel Analytics.
- Default: `-`
- Example: `1`
### Posthog Analytics
#### `NEXT_PUBLIC_ANALYTICS_POSTHOG`
- Type: Optional
- Description: Environment variable to enable [PostHog Analytics][posthog-analytics-url]. Set to `1` to enable PostHog Analytics.
- Default: `-`
- Example: `1`
#### `NEXT_PUBLIC_POSTHOG_KEY`
- Type: Optional
- Description: Set the PostHog project key.
- Default: -
- Example: `phc_xxxxxxxx`
#### `NEXT_PUBLIC_POSTHOG_HOST`
- Type: Optional
- Description: Set the deployment address of the PostHog service. Defaults to the official SaaS address.
- Default: `https://app.posthog.com`
- Example: `https://example.com`
#### `NEXT_PUBLIC_POSTHOG_DEBUG`
- Type: Optional
- Description: Enable debug mode for PostHog.
- Default: -
- Example: `1`
### Umami Analytics
#### `NEXT_PUBLIC_ANALYTICS_UMAMI`
- Type: Optional
- Description: Environment variable to enable [Umami Analytics][umami-analytics-url]. Set to `1` to enable Umami Analytics.
- Default: `-`
- Example: `1`
#### `NEXT_PUBLIC_UMAMI_SCRIPT_URL`
- Type: Optional
- Description: Set the URL of the Umami script. Defaults to the script address of Umami Cloud.
- Default: `https://analytics.umami.is/script.js`
- Example: `https://umami.your-site.com/script.js`
#### `NEXT_PUBLIC_UMAMI_WEBSITE_ID`
- Type: Required
- Description: The website ID in umami
- Default: `-`
- Example: `E738D82A-EE9E-4806-A81F-0CA3CAE57F65`
[auth0-client-page]: https://manage.auth0.com/dashboard
[azure-api-verion-url]: https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version
[openai-api-page]: https://platform.openai.com/account/api-keys
[posthog-analytics-url]: https://posthog.com
[umami-analytics-url]: https://umami.is
[vercel-analytics-url]: https://vercel.com/analytics

# Upstream Sync
## TOC
- [`A` Vercel / Zeabur Deployment](#a-vercel--zeabur-deployment)
- [Enabling Automatic Updates](#enabling-automatic-updates)
- [`B` Docker Deployment](#b-docker-deployment)
## `A` Vercel / Zeabur Deployment
If you have deployed your own project following the one-click deployment steps in the README, you might encounter constant prompts indicating "updates available". This is because Vercel defaults to creating a new project instead of forking this one, resulting in an inability to accurately detect updates. We suggest you redeploy using the following steps:
- Remove the original repository;
- Use the <kbd>Fork</kbd> button at the top right corner of the page to fork this project;
- Re-select and deploy on `Vercel`.
## Enabling Automatic Updates
> \[!NOTE]
>
> If you encounter an error executing Upstream Sync, manually Sync Fork once
Once you have forked the project, due to GitHub restrictions, you need to manually enable Workflows on the Actions page of your forked project and activate the Upstream Sync Action. Once enabled, you can set up hourly automatic updates.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985117-4d48fe7b-0412-4667-8129-b25ebcf2c9de.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985177-7677b4ce-c348-4145-9f60-829d448d5be6.png)
## `B` Docker Deployment
Upgrading the Docker deployment version is very simple, just redeploy the latest image of LobeChat. Here are the instructions to perform these steps:
1. Stop and delete the currently running LobeChat container (assuming the name of the LobeChat container is `lobe-chat`):
```fish
docker stop lobe-chat
docker rm lobe-chat
```
2. Pull the latest Docker image of LobeChat:
```fish
docker pull lobehub/lobe-chat
```
3. Redeploy the LobeChat container using the newly pulled image:
```fish
docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
Make sure you have sufficient permissions to stop and delete the container before executing these commands, and Docker has sufficient permissions to pull the new image.
> \[!NOTE]
>
> If I redeploy, will my local chat history be lost?
>
> Don't worry, all of LobeChat's chat history is stored in your local browser. Therefore, when you redeploy LobeChat using Docker, your chat history will not be lost.

# Plugin Development Guide
#### TOC
- [Plugin Composition](#plugin-composition)
- [Custom Plugin Workflow](#custom-plugin-workflow)
- [**`1`** Create and Start a Plugin Project](#1-create-and-start-a-plugin-project)
- [**`2`** Add the Local Plugin in LobeChat Role Settings](#2-add-the-local-plugin-in-lobechat-role-settings)
- [**`3`** Test the Plugin Functionality in a Session](#3-test-the-plugin-functionality-in-a-session)
- [Local Plugin Development](#local-plugin-development)
- [Manifest](#manifest)
- [Project Structure](#project-structure)
- [Server-side](#server-side)
- [Plugin UI Interface](#plugin-ui-interface)
- [Plugin Deployment and Publication](#plugin-deployment-and-publication)
- [Plugin Shield](#plugin-shield)
- [Link](#link)
## Plugin Composition
A LobeChat plugin consists of the following components:
1. **Plugin Index**: Used to display basic information about the plugin, including the plugin name, description, author, version, and a link to the plugin manifest. The official plugin index can be found at [lobe-chat-plugins](https://github.com/lobehub/lobe-chat-plugins). To submit a plugin to the official plugin marketplace, you need to submit a PR to this repository.
2. **Plugin Manifest**: Used to describe the functionality of the plugin, including the server-side description, frontend display information, and version number. For more details about the manifest, please refer to the [manifest][manifest-docs-url].
3. **Plugin Services**: Used to implement the server-side and frontend modules described in the manifest:
- **Server-side**: Implement the API capabilities described in the manifest.
- **Frontend UI** (optional): Implement the interface described in the manifest, which will be displayed in plugin messages to provide richer information display than plain text.
<br/>
## Custom Plugin Workflow
To integrate a plugin into LobeChat, you need to add it as a custom plugin. This section will guide you through the process.
### **`1`** Create and Start a Plugin Project
First, you need to create a plugin project locally. You can use the [lobe-chat-plugin-template][lobe-chat-plugin-template-url] template we have prepared:
```bash
$ git clone https://github.com/lobehub/chat-plugin-template.git
$ cd chat-plugin-template
$ npm i
$ npm run dev
```
When you see `ready started server on 0.0.0.0:3400, url: http://localhost:3400`, it means that the plugin service has been successfully started locally.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259526-9ef25272-4312-429b-93bc-a95515727ed3.png)
### **`2`** Add the Local Plugin in LobeChat Role Settings
Next, go to LobeChat, create a new assistant, and go to its session settings page:
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259643-1a9cc34a-76f3-4ccf-928b-129654670efd.png)
Click the <kbd>Add</kbd> button on the right side of "Plugin List" to open the custom plugin add dialog:
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259748-2ef6a244-39bb-483c-b359-f156ffcbe1a4.png)
Enter `http://localhost:3400/manifest-dev.json` in the `Plugin Manifest URL` field, which is the URL of the locally started plugin manifest.
At this point, you should see that the identifier of the plugin has been automatically recognized as `chat-plugin-template`. Then fill in the remaining form fields (only the title is required) and click the <kbd>Save</kbd> button to finish adding the custom plugin.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259964-59f4906d-ae2e-4ec0-8b43-db36871d0869.png)
After adding the plugin, you can see the newly added plugin in the plugin list. If you need to modify the plugin's configuration, you can click the <kbd>Settings</kbd> button to make changes.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265260093-a0363c74-0b5b-48dd-b103-2db6b4a8262e.png)
### **`3`** Test the Plugin Functionality in a Session
Next, we need to test the functionality of the custom plugin.
Click the <kbd>Back</kbd> button to go back to the session area, and then send a message to the assistant: "What should I wear?" The assistant will try to ask you about your gender and current mood.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265260291-f0aa0e7c-0ffb-486c-a834-08e73d49896f.png)
After answering, the assistant will make a plugin call to retrieve recommended clothing data based on your gender and mood from the server and push it to you. Finally, it will summarize the information in a text response.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265260461-c22ae797-2809-464b-96fc-d0c020f4807b.png)
After completing these steps, you have learned the basic process of adding and using a custom plugin in LobeChat.
<br/>
## Local Plugin Development
In the workflow above, we learned how to add and use a plugin. Now let's focus on how custom plugins are developed.
### Manifest
The manifest aggregates information about how the plugin's functionality is implemented. The core fields are `api` and `ui`, which describe the server-side API capabilities and the frontend rendering interface address of the plugin, respectively.
Taking the manifest in our template as an example:
```json
{
"api": [
{
"url": "http://localhost:3400/api/clothes",
"name": "recommendClothes",
"description": "Recommend clothes based on the user's mood",
"parameters": {
"properties": {
"mood": {
"description": "The user's current mood, with optional values: happy, sad, anger, fear, surprise, disgust",
"enums": ["happy", "sad", "anger", "fear", "surprise", "disgust"],
"type": "string"
},
"gender": {
"type": "string",
"enum": ["man", "woman"],
"description": "The gender of the user, which needs to be asked before knowing this information"
}
},
"required": ["mood", "gender"],
"type": "object"
}
}
],
"gateway": "http://localhost:3400/api/gateway",
"identifier": "chat-plugin-template",
"ui": {
"url": "http://localhost:3400",
"height": 200
},
"version": "1"
}
```
In this manifest, the following parts are included:
1. `identifier`: This is the unique identifier of the plugin, used to distinguish different plugins. This field needs to be globally unique.
2. `api`: This is an array that contains all the API interface information provided by the plugin. Each interface includes the `url`, `name`, `description`, and `parameters` fields, all of which are required. The `description` and `parameters` fields will be sent to GPT as the `functions` parameter of the [Function Call](https://sspai.com/post/81986). The parameters need to comply with the [JSON Schema](https://json-schema.org/) specification. In this example, the API interface is named `recommendClothes`, which recommends clothes based on the user's mood and gender. The parameters of the interface include the user's mood and gender, both of which are required.
3. `ui`: This field contains information about the plugin's user interface, indicating where LobeChat loads the frontend interface of the plugin from. Since the plugin interface loading in LobeChat is implemented based on `iframe`, you can specify the height and width of the plugin interface as needed.
4. `gateway`: This field specifies the gateway for LobeChat to query API interfaces. The default plugin gateway in LobeChat is a cloud service, but for custom plugins, the requests need to be sent to the local service. Therefore, by specifying the gateway in the manifest, LobeChat will directly request this address and access the local plugin service. The gateway field does not need to be specified for plugins published online.
5. `version`: This is the version number of the plugin, which is currently not used.
In actual development, you can modify the plugin's manifest according to your needs to declare the functionality you want to implement. For a complete introduction to each field in the manifest, please refer to: [manifest][manifest-docs-url].
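To make the `api` field concrete, the sketch below shows how a manifest entry could be reduced to the `functions` payload sent to the model: only `name`, `description`, and `parameters` are forwarded, while `url` stays on the gateway side. This is an illustration, not LobeChat's actual internal code:

```typescript
// Illustrative helper: convert manifest `api` entries into the `functions`
// array used by the Function Call API. Field names follow the manifest
// example above; the JSON Schema in `parameters` is abbreviated.
interface ManifestApi {
  url: string;
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema object
}

const toFunctions = (api: ManifestApi[]) =>
  // Keep only the fields the model needs; `url` is resolved by the gateway.
  api.map(({ name, description, parameters }) => ({ name, description, parameters }));

const manifestApi: ManifestApi[] = [
  {
    url: 'http://localhost:3400/api/clothes',
    name: 'recommendClothes',
    description: "Recommend clothes based on the user's mood",
    parameters: { type: 'object', properties: {}, required: ['mood', 'gender'] },
  },
];

console.log(toFunctions(manifestApi));
```

Note how the `url` never reaches the model; the gateway is what maps a function name back to an endpoint.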
### Project Structure
The [lobe-chat-plugin-template][lobe-chat-plugin-template-url] template project uses Next.js as the development framework. Its core directory structure is as follows:
```
➜ chat-plugin-template
├── public
│ └── manifest-dev.json # Manifest file
├── src
│ └── pages
│ │ ├── api # Next.js server-side folder
│ │ │ ├── clothes.ts # Implementation of the recommendClothes interface
│ │ │ └── gateway.ts # Local plugin proxy gateway
│ │ └── index.tsx # Frontend display interface
```
The template uses Next.js simply because we are familiar with it and it makes development convenient. You can use any frontend framework and programming language you prefer, as long as it implements the functionality described in the manifest.
We also welcome contributions of plugin templates in more frameworks and languages.
### Server-side
The server-side only needs to implement the API interfaces described in the manifest. In the template, we use Vercel's [Edge Runtime](https://nextjs.org/docs/pages/api-reference/edge) as the server, which eliminates the need for operational maintenance.
#### API Implementation
For Edge Runtime, we provide the `createErrorResponse` method in `@lobehub/chat-plugin-sdk` to quickly return error responses. The currently provided error types can be found at: [PluginErrorType][plugin-error-type-url].
Here is an example of the clothes API implementation in the template:
```ts
import { PluginErrorType, createErrorResponse } from '@lobehub/chat-plugin-sdk';

// `manClothes` / `womanClothes` (mock data) and the `RequestData` /
// `ResponseData` types are defined in separate modules of the template.
export default async (req: Request) => {
if (req.method !== 'POST') return createErrorResponse(PluginErrorType.MethodNotAllowed);
const { gender, mood } = (await req.json()) as RequestData;
const clothes = gender === 'man' ? manClothes : womanClothes;
const result: ResponseData = {
clothes: clothes[mood] || [],
mood,
today: Date.now(),
};
return new Response(JSON.stringify(result));
};
```
In this example, `manClothes` and `womanClothes` are hardcoded mock data. In actual scenarios, they can be replaced with database queries.
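As an illustration, the mock data might look like the sketch below. The field shapes follow the API above; the actual data in the template may differ:

```typescript
// Hypothetical mock data keyed by mood. In the template this lives in a
// separate module; in production it could be replaced by a database query.
type Mood = 'happy' | 'sad' | 'anger' | 'fear' | 'surprise' | 'disgust';

const manClothes: Partial<Record<Mood, string[]>> = {
  happy: ['bright T-shirt', 'light jeans'],
  sad: ['soft hoodie', 'comfortable sweatpants'],
};

// The same lookup the API handler performs, with an empty-array fallback
// for moods that have no entries:
const recommend = (mood: Mood): string[] => manClothes[mood] || [];
```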
#### Gateway
The default plugin gateway in LobeChat is a cloud service (`/api/plugins`) that forwards requests to the API addresses described in the manifest, which solves cross-origin issues.
For custom plugins, however, requests need to reach your local service. By specifying the gateway in the manifest (<http://localhost:3400/api/gateway>), LobeChat will request that address directly; you then only need to create a gateway implementation at that address.
```ts
import { createLobeChatPluginGateway } from '@lobehub/chat-plugins-gateway';
export const config = {
runtime: 'edge',
};
export default createLobeChatPluginGateway();
```
[`@lobehub/chat-plugins-gateway`](https://github.com/lobehub/chat-plugins-gateway) includes the implementation of the plugin gateway in LobeChat, which you can use to create a gateway. This allows LobeChat to access the local plugin service.
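Conceptually, what a gateway does can be sketched as follows. This is a simplified illustration, not the actual `@lobehub/chat-plugins-gateway` implementation, and the payload shape (`apiName`, `arguments`) is an assumption:

```typescript
// Simplified sketch of a plugin gateway: receive the API name and the
// JSON-encoded arguments produced by the model, look up the target URL
// (as declared in the manifest), and proxy the call.
interface GatewayPayload {
  apiName: string;
  arguments: string; // JSON string of the function-call arguments
}

// Index built from the manifest's `api` entries.
const apiIndex: Record<string, string> = {
  recommendClothes: 'http://localhost:3400/api/clothes',
};

const proxy = async (payload: GatewayPayload): Promise<Response> => {
  const url = apiIndex[payload.apiName];
  if (!url) return new Response('unknown api', { status: 404 });
  // Forward the model-produced arguments to the plugin's API endpoint.
  return fetch(url, { method: 'POST', body: payload.arguments });
};
```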
### Plugin UI Interface
For a plugin, the UI interface is optional. For example, the [Web Crawler](https://github.com/lobehub/chat-plugin-web-crawler) plugin does not provide a corresponding user interface.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265263241-0e765fdc-3463-4c36-a398-aef177a30df9.png)
If you want to display richer information in plugin messages or include some rich interactions, you can define a user interface for the plugin. For example, the following image shows the user interface of a search engine plugin.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265263427-9bdc03d5-aa61-4f62-a2ce-88683f3308d8.png)
#### Plugin UI Interface Implementation
LobeChat uses `iframe` + `postMessage` to load and communicate with plugin UI. Therefore, the implementation of the plugin UI is the same as normal web development. You can use any frontend framework and programming language you are familiar with.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265263653-4ea87abc-249a-49f3-a241-7ed93ddb1ddf.png)
In our template, we use React + Next.js + antd as the frontend framework. You can find the implementation of the user interface in `src/pages/index.tsx`.
Regarding plugin communication, we provide related methods in [`@lobehub/chat-plugin-sdk`](https://github.com/lobehub/chat-plugin-sdk) to simplify the communication between the plugin and LobeChat. You can use the `fetchPluginMessage` method to actively retrieve the data of the current message from LobeChat. For a detailed description of this method, please refer to: [fetchPluginMessage][fetch-plugin-message-url].
```tsx
import { fetchPluginMessage } from '@lobehub/chat-plugin-sdk';
import { memo, useEffect, useState } from 'react';
import { ResponseData } from '@/type';
const Render = memo(() => {
const [data, setData] = useState<ResponseData>();
useEffect(() => {
// Retrieve the current plugin message from LobeChat
fetchPluginMessage().then((e: ResponseData) => {
setData(e);
});
}, []);
return <>...</>;
});
export default Render;
```
<br/>
## Plugin Deployment and Publication
After completing the plugin development, you can deploy the plugin using your preferred method. For example, you can use Vercel or package it as a Docker image for publication.
If you want more people to use your plugin, you are welcome to submit it for review in the plugin marketplace.
[![][submit-plugin-shield]][submit-plugin-url]
### Plugin Shield
[![lobe-chat-plugin](https://img.shields.io/badge/%F0%9F%A4%AF%20%26%20%F0%9F%A7%A9%20LobeHub-Plugin-95f3d9?labelColor=black&style=flat-square)](https://github.com/lobehub/lobe-chat-plugins)
```markdown
[![lobe-chat-plugin](https://img.shields.io/badge/%F0%9F%A4%AF%20%26%20%F0%9F%A7%A9%20LobeHub-Plugin-95f3d9?labelColor=black&style=flat-square)](https://github.com/lobehub/lobe-chat-plugins)
```
<br/>
## Link
- **📘 Plugin SDK Docs**: <https://chat-plugin-sdk.lobehub.com>
- **🚀 chat-plugin-template**: <https://github.com/lobehub/chat-plugin-template>
- **🧩 chat-plugin-sdk**: <https://github.com/lobehub/chat-plugin-sdk>
- **🚪 chat-plugins-gateway**: <https://github.com/lobehub/chat-plugins-gateway>
- **🏪 lobe-chat-plugins**: <https://github.com/lobehub/lobe-chat-plugins>
<!-- LINK GROUP -->
[fetch-plugin-message-url]: https://github.com/lobehub/chat-plugin-template
[lobe-chat-plugin-template-url]: https://github.com/lobehub/chat-plugin-template
[manifest-docs-url]: https://chat-plugin-sdk.lobehub.com/guides/plugin-manifest
[plugin-error-type-url]: https://github.com/lobehub/chat-plugin-template
[submit-plugin-shield]: https://img.shields.io/badge/🧩/🏪_submit_plugin-%E2%86%92-95f3d9?labelColor=black&style=for-the-badge
[submit-plugin-url]: https://github.com/lobehub/lobe-chat-plugins
# Plugin Usage
The plugin system is a key element in expanding the capabilities of the assistant in LobeChat. You can enhance the assistant's abilities by enabling a variety of plugins.
Watch the following video to quickly get started with using LobeChat plugins:
<https://github.com/lobehub/lobe-chat/assets/28616219/94d4c312-1699-4e24-8782-138883678c9e>
## Plugin Store
You can access the Plugin Store by navigating to "Extension Tools" -> "Plugin Store" in the chat toolbar.
![Plugin Store](https://github.com/lobehub/lobe-chat/assets/28616219/ab4e60d0-1293-49ac-8798-cb29b3b789e6)
The Plugin Store contains plugins that can be directly installed and used in LobeChat.
![Plugin Store](https://github.com/lobehub/lobe-chat/assets/28616219/d7a5d821-116f-4be6-8a1a-38d81a5ea0ea)
## Using Plugins
After installing a plugin, simply enable it under the current assistant to use it.
![Enable Plugin](https://github.com/lobehub/lobe-chat/assets/28616219/76ab1ae7-a4f9-4285-8ebd-45b90251aba1)
## Plugin Configuration
Some plugins may require specific configurations, such as API keys.
After installing a plugin, you can click on "Settings" to enter the plugin's settings and fill in the required configurations:
![Plugin Settings](https://github.com/lobehub/lobe-chat/assets/28616219/10eb3023-4528-4b06-8092-062e7b3865cc)
![Plugin Settings](https://github.com/lobehub/lobe-chat/assets/28616219/ab2e4c25-4b11-431b-9266-442d8b14cb41)
## Installing Custom Plugins
If you wish to install a plugin that is not available in the LobeChat Plugin Store, such as a custom LobeChat plugin you developed, you can click on "Custom Plugins" to install it:
[Custom Plugin Installation](https://github.com/lobehub/lobe-chat/assets/28616219/034a328c-8465-4499-8f93-fdcdb03343cd)
Additionally, LobeChat's plugin mechanism is compatible with ChatGPT plugins, allowing you to easily install corresponding ChatGPT plugins.
If you want to try installing custom plugins on your own, you can use the following links:
- `Custom Lobe Plugin` Mock Credit Card: [Mock Credit Card Plugin](https://lobe-plugin-mock-credit-card.vercel.app/manifest.json)
- `ChatGPT Plugin` Access Links: [Access Links Plugin](https://www.accesslinks.ai/.well-known/ai-plugin.json)
![Custom Plugin](https://github.com/lobehub/lobe-chat/assets/28616219/bb9cd00f-b20c-4d7b-9c60-b921d350e319)
![Custom Plugin](https://github.com/lobehub/lobe-chat/assets/28616219/bdeb678e-6502-4667-86b1-504221ee7ded)
## Developing Plugins
If you want to develop a LobeChat plugin on your own, feel free to refer to the [Lobe Plugin Development Guide](https://chat-plugin-sdk.lobehub.com/guides/intro) to expand the possibilities of your AI assistant!
# Custom Agents Guide
#### TOC
- [Adding Custom Agents](#adding-custom-agents)
- [`A` Add through the Agent Marketplace](#a-add-through-the-agent-marketplace)
- [`B` Create a Custom Agent](#b-create-a-custom-agent)
- [Basic Concepts of Prompts](#basic-concepts-of-prompts)
- [How to write a structured prompt](#how-to-write-a-structured-prompt)
- [How to improve quality and effectiveness](#how-to-improve-quality-and-effectiveness)
- [Model Concepts](#model-concepts)
- [ChatGPT](#chatgpt)
- [Model Parameter Concepts](#model-parameter-concepts)
- [`temperature`](#temperature)
- [`top_p`](#top_p)
- [`presence_penalty`](#presence_penalty)
- [`frequency_penalty`](#frequency_penalty)
- [Further Reading](#further-reading)
## Adding Custom Agents
As the fundamental unit of LobeChat, adding and iterating on agents is crucial. Now you can add agents to your favorites list in two ways:
### `A` Add through the Agent Marketplace
If you're new to writing prompts, you might want to browse the Agent Marketplace in LobeChat. Here, you can find commonly used agents submitted by others and add them to your list with just one click, making it very convenient.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279588466-4c32041b-a8e6-4703-ba4a-f91b7800e359.png)
### `B` Create a Custom Agent
When you need to handle specific tasks, you'll want to consider creating a custom agent to help you solve the problem. You can add and configure the agent in detail using the following steps:
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587283-a3ea8dfd-70fb-47ee-ab00-e3911ac6a939.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587292-a3d102c6-f61e-4578-91f1-c0a4c97588e1.png)
> \[!NOTE]
>
> Quick setting tip: You can conveniently modify the prompt by using the quick edit button in the sidebar.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587294-388d1877-193e-4a50-9fe8-8fbcc3ccefa0.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587298-333da153-13b8-4557-a0a2-cff55e7bc1c0.png)
Continue reading to understand the writing techniques and common model parameter settings for prompts.
<br/>
## Basic Concepts of Prompts
Generative AI is very useful, but it requires human guidance. In most cases, generative AI is like a capable intern who needs clear instructions to perform well. Being able to guide generative AI correctly is a powerful skill. You can guide generative AI by sending a prompt, which is typically a text instruction. The prompt is the input you provide to the agent, and it will influence the output. A good prompt should be structured, clear, concise, and directive.
### How to write a structured prompt
> \[!TIP]
>
> A structured prompt refers to the construction of the prompt having clear logic and structure. For example, if you want the model to generate an article, your prompt may need to include the topic of the article, its outline, and its style.
Let's look at a basic example of a discussion question:
> _"What are the most urgent environmental issues our planet faces, and what can individuals do to help address these problems?"_
We can turn this into a simple prompt by prepending "Answer the following question:" to it.
```
Answer the following question:
What are the most urgent environmental issues our planet faces, and what can individuals do to help address these problems?
```
The results generated by this prompt are inconsistent: some consist of only one or two sentences, which is not ideal for a discussion answer that should span multiple paragraphs. A good prompt should give specific formatting and content instructions. You need to eliminate ambiguity in the language to improve consistency and quality. Here is a better prompt:
```
Write an in-depth essay that includes an introduction, body paragraphs, and a conclusion, answering the following question:
What are the most urgent environmental issues our planet faces, and what can individuals do to help address these problems?
```
The second prompt generates longer output with better structure. The word "essay" in the prompt is intentional: the agent understands what an essay is, making it more likely to generate a coherent, structured answer.
<br/>
### How to improve quality and effectiveness
> \[!TIP]
>
> There are several ways to improve the quality and effectiveness of prompts:
>
> - Be as clear as possible about your needs. The model will try to fulfill your requirements, so if your requirements are not clear, the output may not meet your expectations.
> - Use correct grammar and spelling. The model will try to mimic your language style, so if your language style is problematic, the output may also be problematic.
> - Provide sufficient contextual information. The model will generate output based on the contextual information you provide, so if you provide insufficient contextual information, it may not be able to generate the desired results.
After formulating effective prompts for discussion questions, you now need to refine the generated results. This may involve adjusting the output to fit constraints such as word count or combining concepts from different generated results.
A simple iteration method is to generate multiple outputs and examine them to understand the concepts and structure the agent uses. Once the outputs have been evaluated, you can select the most suitable ones and combine them into a coherent answer. Another iterative approach is to start small and expand step by step. This requires more than one prompt: an initial prompt to write the first one or two paragraphs, followed by additional prompts to expand on what has already been written. Here's a potential philosophical discussion prompt:
> _"Is mathematics invented or discovered? Explain your answer with careful reasoning."_
Add it to a simple prompt like this:
```
Write an opening paragraph answering the following question:
Is mathematics invented or discovered? Explain your answer with careful reasoning.
```
I generated a few answers and found one that I liked:
```
Philosophers and mathematicians have debated whether mathematics is invented or discovered.
Some argue that the concepts and principles of mathematics exist independently of human thought, so they can be considered discovered rather than invented.
For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and studied systematically.
```
It's a good start. Then I used another prompt to expand:
```
I'm writing a detailed essay answering the following prompt:
Is mathematics invented or discovered? Explain your answer with careful reasoning. Here's where I'm at so far: Philosophers and mathematicians have debated whether mathematics is invented or discovered.
Some argue that the concepts and principles of mathematics exist independently of human thought, so they can be considered discovered rather than invented.
For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and studied systematically. Write the next paragraph of my essay.
```
I used this prompt to generate several expansions and selected one that I liked, resulting in the following:
```
Philosophers and mathematicians have debated whether mathematics is invented or discovered.
Some argue that the concepts and principles of mathematics exist independently of human thought, so they can be considered discovered rather than invented.
For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and studied systematically.
On the other hand, some argue that mathematics is largely an invention of human thought.
Mathematical principles are based on logical reasoning, which is a characteristic of human thought.
For example, Euclidean geometry is based on abstract axioms and assumptions that are accepted as true without needing to be proven.
Therefore, geometry can be seen as an invention of human thought rather than a discovery.
Similarly, mathematical formulas and equations are used to model and predict physical phenomena, which are the result of human reasoning.
```
Using expansion prompts, we can gradually write and iterate at each step. This is useful for situations where you need to generate higher-quality output and want to make incremental modifications.
<br/>
## Model Concepts
### ChatGPT
- **gpt-3.5-turbo**: The fastest ChatGPT model currently available, though it may sacrifice some text quality. The context length is 4k.
- **gpt-3.5-turbo-16k**: Same as gpt-3.5-turbo, but with an increased context limit of 16k tokens and a higher cost.
- **gpt-4**: ChatGPT 4.0 has improved language understanding and generation compared to 3.5. It has a better grasp of context and can generate more accurate and natural responses, thanks to improvements in the GPT-4 model such as better language modeling and deeper semantic understanding, but it may be slower than other models. The context length is 8k.
- **gpt-4-32k**: Same as gpt-4, but with an increased context limit of 32k tokens and a higher cost.
<br/>
## Model Parameter Concepts
LLMs may seem magical, but they are essentially probabilistic. Given the input text, the pre-trained neural network produces a set of candidate tokens with associated probabilities and selects the output from among them. Most of the related parameters are about sampling (i.e., how the output is chosen from the candidate tokens).
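The parameters described in this section map directly onto fields of an OpenAI-compatible chat completions request. A minimal sketch (the values shown are illustrative defaults, not recommendations):

```typescript
// Build a request body for an OpenAI-compatible /v1/chat/completions call.
// Each sampling parameter below corresponds to a section in this guide.
const buildRequest = (prompt: string) => ({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user' as const, content: prompt }],
  temperature: 0.7, // randomness: 0 = near-deterministic, higher = more creative
  top_p: 1, // nucleus sampling; usually tune this or temperature, not both
  presence_penalty: 0, // > 0 discourages reusing words that already appeared
  frequency_penalty: 0, // > 0 penalizes words in proportion to their frequency so far
});

// The body would then be POSTed with fetch(), e.g.:
// fetch('https://api.openai.com/v1/chat/completions', {
//   method: 'POST',
//   headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildRequest('Is mathematics invented or discovered?')),
// });
```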
### `temperature`
Controls the randomness of the model's output. Higher values increase randomness. In general, if you input the same prompt multiple times, the model's output will be different each time.
- Set to 0 for a fixed output for each prompt.
- Lower values make the output more focused and deterministic.
- Higher values make the output more random and creative.
> \[!NOTE]
>
> Generally, the longer and clearer the prompt, the better the quality and confidence of the generated output. In this case, you can increase the temperature value. Conversely, if the prompt is short and ambiguous, setting a higher temperature value will make the model's output less stable.
<br/>
### `top_p`
Top-p (nucleus sampling) is a sampling parameter distinct from temperature. Before the model emits a token, it scores a set of candidates. In top-p mode, the candidate list is dynamic: tokens are drawn from the smallest set whose cumulative probability reaches the `top_p` threshold. This introduces randomness into token selection, giving other high-scoring tokens a chance of being chosen instead of always picking the single highest-scoring one.
> \[!NOTE]
>
> Top-p is similar to randomness. In general, it is not recommended to change it together with the randomness parameter, temperature.
<br/>
### `presence_penalty`
The presence penalty parameter can be seen as a punishment for repetitive content in the generated text. When this parameter is set high, the generative model will try to avoid generating repeated words, phrases, or sentences. Conversely, if the presence penalty parameter is low, the generated text may contain more repeated content. By adjusting the value of the presence penalty parameter, you can control the originality and diversity of the generated text. The importance of this parameter is mainly reflected in the following aspects:
- Increasing the originality and diversity of the generated text: In some application scenarios, such as creative writing or generating news headlines, it is desirable for the generated text to have high originality and diversity. By increasing the value of the presence penalty parameter, the probability of generating repeated content in the generated text can be effectively reduced, thereby improving its originality and diversity.
- Preventing generation loops and meaningless content: In some cases, the generative model may produce repetitive and meaningless text that fails to convey useful information. By appropriately increasing the value of the presence penalty parameter, the probability of generating this type of meaningless content can be reduced, thereby improving the readability and usefulness of the generated text.
> \[!NOTE]
>
> It is worth noting that the presence penalty parameter, along with other parameters such as temperature and top-p, collectively affect the quality of the generated text. Compared to other parameters, the presence penalty parameter focuses more on the originality and repetitiveness of the text, while the temperature and top-p parameters have a greater impact on the randomness and determinism of the generated text. By adjusting these parameters properly, comprehensive control of the quality of the generated text can be achieved.
<br/>
### `frequency_penalty`
Frequency penalty is a mechanism that penalizes tokens in proportion to how often they have already appeared in the generated text, reducing the likelihood of the model repeating the same words. The higher the value, the more repetition is suppressed.
- `-2.0` When the morning news starts playing, I noticed that my TV now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now _(The most frequent word is "now" with a percentage of 44.79%)_
- `-1.0` He always watches the news in the morning, watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching watching _(The most frequent word is "watching" with a percentage of 57.69%)_
- `0.0` When the morning sun shines into the small restaurant, a tired mailman appears at the door, holding a bag of mail in his hand. The owner warmly prepares breakfast for him, and he starts sorting the mail while enjoying his breakfast. **_(The most frequent word is "the" with a percentage of 8.45%)_**
- `1.0` A deep sleep girl is awakened by a warm sunbeam. She sees the first ray of sunlight in the morning, surrounded by the sounds of birds and the fragrance of flowers, everything is full of vitality. _(The most frequent word is "the" with a percentage of 5.45%)_
- `2.0` Every morning, he sits on the balcony to have breakfast. In the gentle sunset, everything looks very peaceful. However, one day, as he was about to pick up his breakfast, an optimistic little bird flew by, bringing him a good mood for the day. _(The most frequent word is "the" with a percentage of 4.94%)_
## Further Reading
- **Learn Prompting** - <https://learnprompting.org/docs/intro>
@ -1,202 +0,0 @@
# 自定义助手指南
#### TOC
- [添加自定义助手](#添加自定义助手)
- [`A` 通过角色市场添加](#a-通过角色市场添加)
- [`B` 通过新建自定义助手](#b-通过新建自定义助手)
- [Prompt 基本概念](#prompt-基本概念)
- [如何写好一个结构化 prompt](#如何写好一个结构化-prompt)
- [如何提升其质量和效果](#如何提升其质量和效果)
- [模型的概念](#模型的概念)
- [ChatGPT](#chatgpt)
- [模型参数概念](#模型参数概念)
- [`temperature`](#temperature)
- [`top_p`](#top_p)
- [`presence_penalty`](#presence_penalty)
- [`frequency_penalty`](#frequency_penalty)
- [扩展阅读](#扩展阅读)
## 添加自定义助手
作为 LobeChat 的基础职能单位,助手的添加和迭代是非常重要的。现在你可以通过两种方式将助手添加到你的常用列表中
### `A` 通过角色市场添加
如果你是一个 Prompt 编写的新手,不妨先浏览一下 LobeChat 的助手市场。在这里,你可以找到其他人提交的常用助手,并且只需一键添加到你的列表中,非常方便。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279588466-4c32041b-a8e6-4703-ba4a-f91b7800e359.png)
### `B` 通过新建自定义助手
当你需要处理一些特定的任务时,你就需要考虑创建一个自定义助手来帮助你解决问题。可以通过以下方式添加并进行助手的详细配置
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587283-a3ea8dfd-70fb-47ee-ab00-e3911ac6a939.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587292-a3d102c6-f61e-4578-91f1-c0a4c97588e1.png)
> \[!NOTE]
>
> 快捷设置技巧:可以通过侧边栏的快捷编辑按钮进行 Prompt 的便捷修改
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587294-388d1877-193e-4a50-9fe8-8fbcc3ccefa0.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587298-333da153-13b8-4557-a0a2-cff55e7bc1c0.png)
请继续阅读下文,理解 Prompt 编写技巧和常见的模型参数设置
<br/>
## Prompt 基本概念
生成式 AI 非常有用,但它需要人类指导。通常情况下,生成式 AI 就像公司新来的实习生一样,非常有能力,但需要清晰的指示才能做得好。能够正确地指导生成式 AI 是一项非常强大的技能。你可以通过发送一个 prompt 来指导生成式 AI这通常是一个文本指令。Prompt 是向助手提供的输入,它会影响输出结果。一个好的 Prompt 应该是结构化的,清晰的,简洁的,并且具有指向性。
### 如何写好一个结构化 prompt
> \[!TIP]
>
> 结构化 prompt 是指 prompt 的构造应该有明确的逻辑和结构。例如,如果你想让模型生成一篇文章,你的 prompt 可能需要包括文章的主题,文章的大纲,文章的风格等信息。
让我们看一个基本的讨论问题的例子:
> _"我们星球面临的最紧迫的环境问题是什么,个人可以采取哪些措施来帮助解决这些问题?"_
我们可以将其转化为简单的助手提示,只需将「回答以下问题:」放在前面。
```
回答以下问题:
我们星球面临的最紧迫的环境问题是什么,个人可以采取哪些措施来帮助解决这些问题?
```
这个提示生成的结果并不一致,有些只有一两个句子。一个典型的讨论回答应该有多个段落,因此这些结果并不理想。一个好的提示应该给出**具体的格式和内容指令**。您需要消除语言中的歧义以提高一致性和质量。这是一个更好的提示:
```
写一篇高度详细的论文,包括引言、正文和结论段,回答以下问题:
我们星球面临的最紧迫的环境问题是什么,
个人可以采取哪些措施来帮助解决这些问题?
```
第二个提示生成了更长的输出和更好的结构。提示中使用 “论文” 一词是有意的,因为助手可以理解论文的定义,因此更有可能生成连贯的、结构化的回答。
<br/>
### 如何提升其质量和效果
> \[!TIP]
>
> 提升 prompt 质量和效果的方法主要有以下几点:
>
> - 尽量明确你的需求。模型的输出会尽可能满足你的需求,所以如果你的需求不明确,输出可能会不如预期。
> - 使用正确的语法和拼写。模型会尽可能模仿你的语言风格,所以如果你的语言风格有问题,输出可能也会有问题。
> - 提供足够的上下文信息。模型会根据你提供的上下文信息生成输出,所以如果你提供的上下文信息不足,可能无法生成你想要的结果。
在为讨论问题制定有效的提示后,您现在需要细化生成的结果。这可能涉及到调整输出以符合诸如字数等限制,或将不同生成的结果的概念组合在一起。
迭代的一个简单方法是生成多个输出并查看它们,以了解正在使用的概念和结构。一旦评估了输出,您就可以选择最合适的输出并将它们组合成一个连贯的回答。另一种迭代的方法是逐步开始,然后**逐步扩展**。这需要不止一个提示:一个起始提示,用于撰写最初的一两段,然后是其他提示,以扩展已经写过的内容。以下是一个潜在的哲学讨论问题:
> _"数学是发明还是发现?用仔细的推理来解释你的答案。"_
将其添加到一个简单的提示中,如下所示:
```
写一个起始段回答以下问题:
数学是发明还是发现?用仔细的推理来解释你的答案。
```
我生成了一些回答,找到了一个我喜欢的:
```
哲学家和数学家之间争论不休,数学是发明还是发现。
一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。
```
这是一个不错的开端。然后我使用另一个提示来扩展:
```
我正在撰写一篇详细的短文,回答以下提示:
数学是发明还是发现?用仔细的推理来解释你的答案。这是我目前的情况: 哲学家和数学家之间争论不休,数学是发明还是发现。
一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。写下我文章的下一段。
```
我使用此提示生成了几个扩展,并选择了一个我喜欢的,最终得到了以下结果:
```
哲学家和数学家之间争论不休,数学是发明还是发现。
一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。
另一方面,有人认为数学在很大程度上是人类思维的发明。
数学原理基于逻辑推理,这是人类思维的一个特征。
例如,欧几里德几何基于抽象的公理和前提,被接受为真实而无需证明。
因此,可以认为几何是人类思维的发明,而不是一种发现。
同样,数学公式和方程用于模拟和预测物理现象,这又是人类推理的结果。
```
使用扩展提示,我们可以逐步地写作并在每个步骤上进行迭代。这对于需要**生成更高质量的输出并希望逐步修改**的情况非常有用。
<br/>
## 模型的概念
### ChatGPT
- **gpt-3.5-turbo**:目前生成速度最快的 ChatGPT 模型,但可能会牺牲一些生成文本的质量,上下文长度为 4k。
- **gpt-3.5-turbo-16k**:同 gpt-3.5-turbo上下文限制增加到 16k token同时费率更高。
- **gpt-4**ChatGPT 4.0 在语言理解和生成能力方面相对于 3.5 有所提升。它可以更好地理解上下文和语境,并生成更准确、自然的回答。这得益于 GPT-4 模型的改进,包括更好的语言建模和更深入的语义理解,但它的速度可能比其他模型慢,上下文长度为 8k。
- **gpt-4-32k**:同 gpt-4上下文限制增加到 32k token同时费率更高。
<br/>
## 模型参数概念
LLM 看似很神奇,但本质还是一个概率问题,神经网络根据输入的文本,从预训练的模型里面生成一堆候选词,选择概率高的作为输出,相关的参数,大多都是跟采样有关(也就是要如何从候选词里选择输出)。
### `temperature`
用于控制模型输出的结果的随机性,这个值越大随机性越大。一般我们多次输入相同的 prompt 之后,模型的每次输出都不一样。
- 设置为 0对每个 prompt 都生成固定的输出
- 较低的值,输出更集中,更有确定性
- 较高的值,输出更随机(更有创意)
> \[!NOTE]
>
> 一般来说prompt 越长,描述得越清楚,模型生成的输出质量就越好,置信度越高,这时可以适当调高 temperature 的值;反过来,如果 prompt 很短,很含糊,这时再设置一个比较高的 temperature 值,模型的输出就很不稳定了。
<br/>
### `top_p`
核采样 top_p 也是采样参数,跟 temperature 不一样的采样方式。模型在输出之前,会生成一堆 token这些 token 根据质量高低排名,核采样模式中候选词列表是动态的,从 tokens 里按百分比选择候选词。 top-p 为选择 token 引入了随机性,让其他高分的 token 有被选择的机会,不会总是选最高分的。
> \[!NOTE]
>
> top_p 与随机性类似,一般来说不建议和随机性 temperature 一起更改
<br/>
### `presence_penalty`
Presence Penalty 参数可以看作是对生成文本中重复内容的一种惩罚。当该参数设置较高时,生成模型会尽量避免产生重复的词语、短语或句子。相反,如果 Presence Penalty 参数较低,则生成的文本可能会包含更多重复的内容。通过调整 Presence Penalty 参数的值,可以实现对生成文本的原创性和多样性的控制。参数的重要性主要体现在以下几个方面:
- 提高生成文本的独创性和多样性:在某些应用场景下,如创意写作、生成新闻标题等,需要生成的文本具有较高的独创性和多样性。通过增加 Presence Penalty 参数的值,可以有效减少生成文本中的重复内容,从而提高文本的独创性和多样性。
- 防止生成循环和无意义的内容:在某些情况下,生成模型可能会产生循环、重复的文本,这些文本通常无法传达有效的信息。通过适当增加 Presence Penalty 参数的值,可以降低生成这类无意义内容的概率,提高生成文本的可读性和实用性。
> \[!NOTE]
>
> 值得注意的是Presence Penalty 参数与其他参数(如 Temperature 和 top-p共同影响着生成文本的质量。对比其他参数Presence Penalty 参数主要关注文本的独创性和重复性,而 Temperature 和 top-p 参数则更多地影响着生成文本的随机性和确定性。通过合理地调整这些参数,可以实现对生成文本质量的综合控制
<br/>
### `frequency_penalty`
Frequency Penalty 是一种机制,通过对文本中频繁出现的词汇施加惩罚,以减少模型重复同一词语的可能性;值越大,越有可能降低重复字词。
- `-2.0` 当早间新闻开始播出,我发现我家电视现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在 _频率最高的词是 “现在”,占比 44.79%_
- `-1.0` 他总是在清晨看新闻,在电视前看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看 _频率最高的词是 “看”,占比 57.69%_
- `0.0` 当清晨的阳光洒进小餐馆时,一名疲倦的邮递员出现在门口,他的手中提着一袋信件。店主热情地为他准备了一份早餐,他在享用早餐的同时开始整理邮件。**(频率最高的词是 "的",占比 8.45%**
- `1.0` 一个深度睡眠的女孩被一阵温暖的阳光唤醒她看到了早晨的第一缕阳光周围是鸟语花香一切都充满了生机。_频率最高的词是 “的”,占比 5.45%_
- `2.0` 每天早上,他都会在阳台上坐着吃早餐。在柔和的夕阳照耀下,一切看起来都非常宁静。然而有一天,当他准备端起早餐的时候,一只乐观的小鸟飞过,给他带来了一天的好心情。 _频率最高的词是 “的”,占比 4.94%_
## 扩展阅读
- **Learn Prompting** - <https://learnprompting.org/zh-Hans/docs/intro>
@ -1,31 +0,0 @@
# Topic Guide
#### TOC
- [Explanation of Agent and Topic Concepts](#explanation-of-agent-and-topic-concepts)
- [User Guide](#user-guide)
## Explanation of Agent and Topic Concepts
In the official ChatGPT app, there is only the concept of topics, as shown in the figure, the sidebar contains the user's historical conversation topic list.
> \[!NOTE]
>
> However, in practice we have found that this mode has many problems: the indexing of historical conversations is too scattered, and repetitive tasks lack a stable entry point. For example, if I want ChatGPT to help me translate documents, in this mode I need to keep creating new topics and re-applying the translation prompt I created earlier. For high-frequency tasks, this is a very inefficient form of interaction.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602474-fe7cb3f3-8eb7-40d3-a69f-6615393bbd4e.png)
Therefore, in LobeChat, we introduced the concept of `Agent`. The agent is a complete functional module, and each agent has its own responsibilities and tasks. The agent can help you handle various tasks and provide professional advice and guidance.
At the same time, we scope topics to each agent. The advantage is that every agent has an independent topic list: you can choose the agent that matches the current task and quickly switch between historical conversation records. This approach better matches users' habits from common chat software and improves interaction efficiency.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602489-89893e61-2791-4083-9b57-ed80884ad58b.png)
<br/>
## User Guide
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602496-fd72037a-735e-4cc2-aa56-2994bceaba81.png)
- **Save Topic:** During the chat, if you want to save the current context and start a new topic, you can click the save button next to the send button.
- **Topic List:** Clicking on the topic in the list can quickly switch historical conversation records and continue the conversation. You can also click the star icon <kbd>⭐️</kbd> to bookmark the topic to the top, or rename and delete the topic through the more button on the right.
@ -1,31 +0,0 @@
# 话题指南
#### TOC
- [助手与话题概念解析](#助手与话题概念解析)
- [使用指南](#使用指南)
## 助手与话题概念解析
在 ChatGPT 官方应用中,只存在话题的概念,如图所示,在侧边栏中是用户的历史对话话题列表。
> \[!NOTE]
>
> 但在我们的使用过程中其实会发现这种模式存在很多问题,比如历史对话的信息索引过于分散问题,同时当处理一些重复任务时很难有一个稳定的入口,比如我希望有一个稳定的入口可以让 ChatGPT 帮助我翻译文档,在这个模式下,我需要不断新建新的话题同时再设置我之前创建好的翻译 Prompt 设定,当有高频任务存在时,这将是一个效率很低的交互形式。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602474-fe7cb3f3-8eb7-40d3-a69f-6615393bbd4e.png)
因此在 LobeChat 中,我们引入了 **助手** 的概念。助手是一个完整的功能模块,每个助手都有自己的职责和任务。助手可以帮助你处理各种任务,并提供专业的建议和指导。
与此同时,我们将话题索引到每个助手内部。这样做的好处是,每个助手都有一个独立的话题列表,你可以根据当前任务选择对应的助手,并快速切换历史对话记录。这种方式更符合用户对常见聊天软件的使用习惯,提高了交互的效率。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602489-89893e61-2791-4083-9b57-ed80884ad58b.png)
<br/>
## 使用指南
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602496-fd72037a-735e-4cc2-aa56-2994bceaba81.png)
- **保存话题:** 在聊天过程中,如果想要保存当前上下文并开启新的话题,可以点击发送按钮旁边的保存按钮。
- **话题列表:** 点击列表中的话题可以快速切换历史对话记录,并继续对话。你还可以通过点击星标图标 <kbd>⭐️</kbd> 将话题收藏置顶,或者通过右侧更多按钮对话题进行重命名和删除操作。
docs/agents/concepts.mdx Normal file
@ -0,0 +1,18 @@
# Topics and Assistants
## ChatGPT and "Topics"
In the official ChatGPT application, there is only the concept of "topics." As shown in the image, the user's historical conversation topics are listed in the sidebar.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602474-fe7cb3f3-8eb7-40d3-a69f-6615393bbd4e.png)
However, in our usage, we have found that this model has many issues. For example, the information indexing of historical conversations is too scattered. Additionally, when dealing with repetitive tasks, it is difficult to have a stable entry point. For instance, if I want ChatGPT to help me translate a document, in this model, I would need to constantly create new topics and then set up the translation prompt I had previously created. When there are high-frequency tasks, this will result in a very inefficient interaction format.
## "Topics" and "Assistants"
Therefore, in LobeChat, we have introduced the concept of **assistants**. An assistant is a complete functional module, each with its own responsibilities and tasks. Assistants can help you handle various tasks and provide professional advice and guidance.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602489-89893e61-2791-4083-9b57-ed80884ad58b.png)
At the same time, we have integrated topics into each assistant. The benefit of this approach is that each assistant has an independent topic list. You can choose the corresponding assistant based on the current task and quickly switch between historical conversation records. This method is more in line with users' habits in common chat software, improving interaction efficiency.
@ -0,0 +1,17 @@
# 话题与助手
## ChatGPT 与「话题」
在 ChatGPT 官方应用中,只存在话题的概念,如图所示,在侧边栏中是用户的历史对话话题列表。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602474-fe7cb3f3-8eb7-40d3-a69f-6615393bbd4e.png)
但在我们的使用过程中其实会发现这种模式存在很多问题,比如历史对话的信息索引过于分散问题,同时当处理一些重复任务时很难有一个稳定的入口,比如我希望有一个稳定的入口可以让 ChatGPT 帮助我翻译文档,在这个模式下,我需要不断新建新的话题同时再设置我之前创建好的翻译 Prompt 设定,当有高频任务存在时,这将是一个效率很低的交互形式。
## 「话题」与「助手」
因此在 LobeChat 中,我们引入了 **助手** 的概念。助手是一个完整的功能模块,每个助手都有自己的职责和任务。助手可以帮助你处理各种任务,并提供专业的建议和指导。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602489-89893e61-2791-4083-9b57-ed80884ad58b.png)
与此同时,我们将话题索引到每个助手内部。这样做的好处是,每个助手都有一个独立的话题列表,你可以根据当前任务选择对应的助手,并快速切换历史对话记录。这种方式更符合用户对常见聊天软件的使用习惯,提高了交互的效率。
@ -0,0 +1,34 @@
import { Callout, Cards } from 'nextra/components';
# Custom Assistant Guide
As the basic functional unit of LobeChat, adding and iterating assistants is very important. Now you can add assistants to your favorites list in two ways.
## `A` Add through the role market
If you are a beginner in Prompt writing, you might want to browse the assistant market of LobeChat first. Here, you can find commonly used assistants submitted by others and easily add them to your list with just one click, which is very convenient.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279588466-4c32041b-a8e6-4703-ba4a-f91b7800e359.png)
## `B` Create a custom assistant
When you need to handle specific tasks, you need to consider creating a custom assistant to help you solve the problem. You can add and configure the assistant in detail in the following ways.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587283-a3ea8dfd-70fb-47ee-ab00-e3911ac6a939.png) ![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587292-a3d102c6-f61e-4578-91f1-c0a4c97588e1.png)
<Callout type={'info'}>
**Quick Setup Tip**: You can conveniently modify the Prompt through the quick edit button in the sidebar.
</Callout>
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587294-388d1877-193e-4a50-9fe8-8fbcc3ccefa0.png) ![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587298-333da153-13b8-4557-a0a2-cff55e7bc1c0.png)
If you want to understand Prompt writing tips and common model parameter settings, you can continue to view:
<Cards>
<Cards.Card href={'/en/usage/agents/prompt'} title={'Prompt User Guide'}></Cards.Card>
<Cards.Card
href={'/en/usage/agents/model'}
title={'Large Language Model User Guide'}
></Cards.Card>
</Cards>
@ -0,0 +1,31 @@
import { Callout, Cards } from 'nextra/components';
# 自定义助手指南
作为 LobeChat 的基础职能单位,助手的添加和迭代是非常重要的。现在你可以通过两种方式将助手添加到你的常用列表中
## `A` 通过角色市场添加
如果你是一个 Prompt 编写的新手,不妨先浏览一下 LobeChat 的助手市场。在这里,你可以找到其他人提交的常用助手,并且只需一键添加到你的列表中,非常方便。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279588466-4c32041b-a8e6-4703-ba4a-f91b7800e359.png)
## `B` 通过新建自定义助手
当你需要处理一些特定的任务时,你就需要考虑创建一个自定义助手来帮助你解决问题。可以通过以下方式添加并进行助手的详细配置
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587283-a3ea8dfd-70fb-47ee-ab00-e3911ac6a939.png) ![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587292-a3d102c6-f61e-4578-91f1-c0a4c97588e1.png)
<Callout type={'info'}>
**快捷设置技巧**: 可以通过侧边栏的快捷编辑按钮进行 Prompt 的便捷修改
</Callout>
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587294-388d1877-193e-4a50-9fe8-8fbcc3ccefa0.png) ![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587298-333da153-13b8-4557-a0a2-cff55e7bc1c0.png)
如果你希望理解 Prompt 编写技巧和常见的模型参数设置,可以继续查看:
<Cards>
<Cards.Card href={'/zh/usage/agents/prompt'} title={'Prompt 使用指南'}></Cards.Card>
<Cards.Card href={'/zh/usage/agents/model'} title={'大语言模型使用指南'}></Cards.Card>
</Cards>
docs/agents/model.mdx Normal file
@ -0,0 +1,68 @@
import { Callout } from 'nextra/components';
# Model Guide
## ChatGPT
- **gpt-3.5-turbo**: Currently the fastest ChatGPT model. It may sacrifice some generation quality in exchange for speed, and has a context length of 4k.
- **gpt-3.5-turbo-16k**: Same as gpt-3.5-turbo, but with the context limit increased to 16k tokens at a higher rate.
- **gpt-4**: ChatGPT 4.0 improves on 3.5 in language understanding and generation. It understands context and nuance better and produces more accurate, natural responses, thanks to improvements in the GPT-4 model such as better language modeling and deeper semantic understanding. It may be slower than other models, with a context length of 8k.
- **gpt-4-32k**: Same as gpt-4, but with the context limit increased to 32k tokens at a higher rate.
## Concept of Model Parameters
LLM seems magical, but it is essentially a probability problem. The neural network generates a bunch of candidate words from the pre-trained model based on the input text and selects the high-probability ones as output. Most of the related parameters are associated with sampling (i.e., how to select the output from the candidate words).
### `temperature`
This parameter controls the randomness of the model's output. The higher the value, the greater the randomness. Generally, when the same prompt is input multiple times, the model's output varies each time.
- Set to 0: Generates a fixed output for each prompt
- Lower values: More concentrated and deterministic output
- Higher values: More random output (more creative)
<Callout>
Generally, the longer and clearer the prompt, the better the quality and confidence of the model's
output. In such cases, the temperature value can be adjusted appropriately. Conversely, if the
prompt is short and ambiguous, setting a relatively high temperature value will result in unstable
model output.
</Callout>
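The effect can be sketched as temperature-scaled softmax: the logits are divided by the temperature before being turned into probabilities. This is a simplified illustration with invented logit values, not the actual implementation of any particular model:

```typescript
// Simplified sketch: dividing logits by the temperature before softmax
// sharpens the distribution at low temperatures and flattens it at high ones.
// The logit values below are invented for illustration.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const t = Math.max(temperature, 1e-6); // temperature 0 degenerates to picking the top token
  const scaled = logits.map((logit) => logit / t);
  const max = Math.max(...scaled); // subtract the max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5]; // hypothetical scores for three candidate tokens
console.log(softmaxWithTemperature(logits, 0.2)); // top token dominates: near-deterministic
console.log(softmaxWithTemperature(logits, 1.5)); // flatter distribution: more random, more creative
```

At temperature 0.2 the first token takes almost all of the probability mass; at 1.5 the three tokens are much closer, which is why higher temperatures feel more creative.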
<br />
### `top_p`
Top-p (nucleus sampling) is also a sampling parameter, but it samples differently from temperature. Before producing output, the model generates a set of candidate tokens ranked by probability. In top-p mode, the candidate list is dynamic: tokens are taken from the ranked list until their cumulative probability reaches the chosen percentage. Top-p introduces randomness into token selection, giving other high-scoring tokens a chance to be selected rather than always choosing the highest-scoring one.
<Callout>
Top-p is similar to randomness, and it is generally not recommended to change it together with the
randomness of temperature.
</Callout>
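The dynamic candidate list can be sketched as follows. This is a simplified illustration with invented probabilities; real implementations also re-normalize and sample from the kept set:

```typescript
// Simplified sketch of nucleus (top-p) filtering: keep the smallest set of
// top-ranked tokens whose cumulative probability reaches topP, and sample
// only from that set. The probabilities below are invented for illustration.
function topPFilter(probs: number[], topP: number): number[] {
  const ranked = probs
    .map((p, index) => ({ p, index }))
    .sort((a, b) => b.p - a.p); // rank candidates from most to least likely
  const kept: number[] = [];
  let cumulative = 0;
  for (const { p, index } of ranked) {
    kept.push(index);
    cumulative += p;
    if (cumulative >= topP) break; // the nucleus is complete
  }
  return kept.sort((a, b) => a - b); // indices of tokens still eligible
}

const probs = [0.5, 0.25, 0.15, 0.1]; // hypothetical probabilities for four tokens
console.log(topPFilter(probs, 0.7)); // → [ 0, 1 ]: only the top two tokens compete
console.log(topPFilter(probs, 1.0)); // all four tokens stay eligible
```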
<br />
### `presence_penalty`
The presence penalty parameter can be seen as a punishment for repetitive content in the generated text. When this parameter is set high, the generation model will try to avoid producing repeated words, phrases, or sentences. Conversely, if the presence penalty parameter is set low, the generated text may contain more repetitive content. By adjusting the value of the presence penalty parameter, control over the originality and diversity of the generated text can be achieved. The importance of this parameter is mainly reflected in the following aspects:
- Enhancing the originality and diversity of the generated text: In certain applications, such as creative writing or generating news headlines, it is necessary for the generated text to have high originality and diversity. By increasing the value of the presence penalty parameter, the amount of repeated content in the generated text can be effectively reduced, thereby enhancing its originality and diversity.
- Preventing the generation of loops and meaningless content: In some cases, the generation model may produce repetitive or meaningless text that usually fails to convey useful information. By appropriately increasing the value of the presence penalty parameter, the probability of generating such meaningless content can be reduced, thereby improving the readability and practicality of the generated text.
<Callout>
It is worth noting that the presence penalty parameter, in conjunction with other parameters such
as temperature and top-p, collectively influences the quality of the generated text. Compared to
other parameters, the presence penalty parameter primarily focuses on the originality and
repetitiveness of the text, while the temperature and top-p parameters more significantly affect
the randomness and determinism of the generated text. By adjusting these parameters reasonably,
comprehensive control over the quality of the generated text can be achieved.
</Callout>
### `frequency_penalty`
Frequency penalty is a mechanism that penalizes tokens in proportion to how often they already occur in the text, reducing the likelihood of the model repeating the same word. The larger the value, the more strongly repetition is suppressed.
- `-2.0` When the morning news started broadcasting, I found that my TV now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now now **(The highest frequency word is "now", accounting for 44.79%)**
- `-1.0` He always watches the news in the early morning, in front of the TV watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch watch **(The highest frequency word is "watch", accounting for 57.69%)**
- `0.0` When the morning sun poured into the small diner, a tired postman appeared at the door, carrying a bag of letters in his hands. The owner warmly prepared a breakfast for him, and he started sorting the mail while enjoying his breakfast. **(The highest frequency word is "of", accounting for 8.45%)**
- `1.0` A girl in deep sleep was woken up by a warm ray of sunshine, she saw the first ray of morning light, surrounded by birdsong and flowers, everything was full of vitality. **(The highest frequency word is "of", accounting for 5.45%)**
- `2.0` Every morning, he would sit on the balcony to have breakfast. Under the soft setting sun, everything looked very peaceful. However, one day, when he was about to pick up his breakfast, an optimistic little bird flew by, bringing him a good mood for the day. **(The highest frequency word is "of", accounting for 4.94%)**
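Both penalties can be related through the per-token logit adjustment described in OpenAI's API documentation: the frequency penalty scales with how many times a token has already appeared, while the presence penalty is a flat one-time cost for any token that has appeared at all. A sketch with invented numbers:

```typescript
// Sketch of the per-token logit adjustment described in OpenAI's API docs:
//   adjusted = logit - count * frequencyPenalty - (count > 0 ? 1 : 0) * presencePenalty
// where `count` is how many times the token already appears in the text so far.
function applyPenalties(
  logit: number,
  count: number,
  presencePenalty: number,
  frequencyPenalty: number,
): number {
  return logit - count * frequencyPenalty - (count > 0 ? 1 : 0) * presencePenalty;
}

// A token seen 5 times is pushed down hard by a positive frequency penalty:
console.log(applyPenalties(2.0, 5, 0.0, 1.0)); // → -3
// A negative value rewards repetition instead, producing loops like the examples above:
console.log(applyPenalties(2.0, 5, 0.0, -1.0)); // → 7
// An unseen token (count 0) is never penalized:
console.log(applyPenalties(2.0, 0, 2.0, 2.0)); // → 2
```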
@ -0,0 +1,64 @@
import { Callout } from 'nextra/components';
# 模型指南
## ChatGPT
- **gpt-3.5-turbo**:目前生成速度最快的 ChatGPT 模型,但可能会牺牲一些生成文本的质量,上下文长度为 4k。
- **gpt-3.5-turbo-16k**:同 gpt-3.5-turbo上下文限制增加到 16k token同时费率更高。
- **gpt-4**ChatGPT 4.0 在语言理解和生成能力方面相对于 3.5 有所提升。它可以更好地理解上下文和语境,并生成更准确、自然的回答。这得益于 GPT-4 模型的改进,包括更好的语言建模和更深入的语义理解,但它的速度可能比其他模型慢,上下文长度为 8k。
- **gpt-4-32k**:同 gpt-4上下文限制增加到 32k token同时费率更高。
## 模型参数概念
LLM 看似很神奇,但本质还是一个概率问题,神经网络根据输入的文本,从预训练的模型里面生成一堆候选词,选择概率高的作为输出,相关的参数,大多都是跟采样有关(也就是要如何从候选词里选择输出)。
### `temperature`
用于控制模型输出的结果的随机性,这个值越大随机性越大。一般我们多次输入相同的 prompt 之后,模型的每次输出都不一样。
- 设置为 0对每个 prompt 都生成固定的输出
- 较低的值,输出更集中,更有确定性
- 较高的值,输出更随机(更有创意)
<Callout>
一般来说prompt 越长,描述得越清楚,模型生成的输出质量就越好,置信度越高,这时可以适当调高
temperature 的值;反过来,如果 prompt 很短,很含糊,这时再设置一个比较高的 temperature
值,模型的输出就很不稳定了。
</Callout>
<br />
### `top_p`
核采样 top_p 也是采样参数,跟 temperature 不一样的采样方式。模型在输出之前,会生成一堆 token这些 token 根据质量高低排名,核采样模式中候选词列表是动态的,从 tokens 里按百分比选择候选词。 top-p 为选择 token 引入了随机性,让其他高分的 token 有被选择的机会,不会总是选最高分的。
<Callout>top_p 与随机性类似,一般来说不建议和随机性 temperature 一起更改</Callout>
<br />
### `presence_penalty`
Presence Penalty 参数可以看作是对生成文本中重复内容的一种惩罚。当该参数设置较高时,生成模型会尽量避免产生重复的词语、短语或句子。相反,如果 Presence Penalty 参数较低,则生成的文本可能会包含更多重复的内容。通过调整 Presence Penalty 参数的值,可以实现对生成文本的原创性和多样性的控制。参数的重要性主要体现在以下几个方面:
- 提高生成文本的独创性和多样性:在某些应用场景下,如创意写作、生成新闻标题等,需要生成的文本具有较高的独创性和多样性。通过增加 Presence Penalty 参数的值,可以有效减少生成文本中的重复内容,从而提高文本的独创性和多样性。
- 防止生成循环和无意义的内容:在某些情况下,生成模型可能会产生循环、重复的文本,这些文本通常无法传达有效的信息。通过适当增加 Presence Penalty 参数的值,可以降低生成这类无意义内容的概率,提高生成文本的可读性和实用性。
<Callout>
值得注意的是Presence Penalty 参数与其他参数(如 Temperature 和
top-p共同影响着生成文本的质量。对比其他参数Presence Penalty
参数主要关注文本的独创性和重复性,而 Temperature 和 top-p
参数则更多地影响着生成文本的随机性和确定性。通过合理地调整这些参数,可以实现对生成文本质量的综合控制
</Callout>
<br />
### `frequency_penalty`
Frequency Penalty 是一种机制,通过对文本中频繁出现的词汇施加惩罚,以减少模型重复同一词语的可能性;值越大,越有可能降低重复字词。
- `-2.0` 当早间新闻开始播出,我发现我家电视现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在现在 _频率最高的词是 “现在”,占比 44.79%_
- `-1.0` 他总是在清晨看新闻,在电视前看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看看 _频率最高的词是 “看”,占比 57.69%_
- `0.0` 当清晨的阳光洒进小餐馆时,一名疲倦的邮递员出现在门口,他的手中提着一袋信件。店主热情地为他准备了一份早餐,他在享用早餐的同时开始整理邮件。**(频率最高的词是 "的",占比 8.45%**
- `1.0` 一个深度睡眠的女孩被一阵温暖的阳光唤醒她看到了早晨的第一缕阳光周围是鸟语花香一切都充满了生机。_频率最高的词是 “的”,占比 5.45%_
- `2.0` 每天早上,他都会在阳台上坐着吃早餐。在柔和的夕阳照耀下,一切看起来都非常宁静。然而有一天,当他准备端起早餐的时候,一只乐观的小鸟飞过,给他带来了一天的好心情。 _频率最高的词是 “的”,占比 4.94%_
docs/agents/prompt.mdx Normal file
@ -0,0 +1,96 @@
import { Callout } from 'nextra/components';
# Guide to Using Prompts
## Basic Concepts of Prompts
Generative AI is very useful, but it requires human guidance. In most cases, generative AI can be as capable as a new intern at a company, but it needs clear instructions to perform well. The ability to guide generative AI correctly is a very powerful skill. You can guide generative AI by sending a prompt, which is usually a text instruction. A prompt is the input provided to the assistant, and it will affect the output. A good prompt should be structured, clear, concise, and directive.
## How to Write a Well-Structured Prompt
<Callout type={'info'}>
A structured prompt refers to the construction of the prompt having a clear logic and structure.
For example, if you want the model to generate an article, your prompt may need to include the
article's topic, outline, and style.{' '}
</Callout>
Let's look at a basic discussion prompt example:
> _"What are the most urgent environmental issues facing our planet, and what actions can individuals take to help address these issues?"_
We can turn it into a simple assistant prompt by placing "Answer the following questions:" at the front.
```prompt
Answer the following questions:
What are the most urgent environmental issues facing our planet, and what actions can individuals take to help address these issues?
```
The results generated by this prompt are inconsistent; some are only one or two sentences. A typical discussion response should have multiple paragraphs, so these results are not ideal. A good prompt should provide **specific formatting and content instructions**. You need to eliminate ambiguity in the language to improve consistency and quality. Here is a better prompt.
```prompt
Write a highly detailed paper, including an introduction, body, and conclusion, to answer the following questions:
What are the most urgent environmental issues facing our planet,
and what actions can individuals take to help address these issues?
```
The second prompt generates longer output and better structure. The use of the term "paper" in the prompt is intentional, as the assistant can understand the definition of a paper, making it more likely to generate coherent, structured responses.
## How to Improve Quality and Effectiveness
<Callout type={'info'}>
There are several ways to improve the quality and effectiveness of prompts:
- **Be Clear About Your Needs:** The model's output will strive to meet your needs, so if your needs are not clear, the output may not meet expectations.
- **Use Correct Grammar and Spelling:** The model will try to mimic your language style, so if your language style is problematic, the output may also be problematic.
- **Provide Sufficient Contextual Information:** The model will generate output based on the contextual information you provide, so if the information is insufficient, it may not produce the desired results.
</Callout>
After formulating effective prompts for discussing issues, you now need to refine the generated results. This may involve adjusting the output to fit constraints such as word count or combining concepts from different generated results.
A simple method of iteration is to generate multiple outputs and review them to understand the concepts and structures being used. Once the outputs have been evaluated, you can select the most suitable ones and combine them into a coherent response. Another iterative method is to start small and **gradually expand**. This requires more than one prompt: a starting prompt for drafting the first one or two paragraphs, followed by additional prompts to expand on what has already been written. Here is a potential philosophical discussion prompt:
> _"Is mathematics an invention or a discovery? Use careful reasoning to explain your answer."_
Add it to a simple prompt as follows:
```prompt
Write an opening paragraph to answer the following question:
Is mathematics an invention or a discovery? Use careful reasoning to explain your answer.
```
After generating some responses, we found one that we liked:
```prompt
Philosophers and mathematicians have long debated whether mathematics is an invention or a discovery.
Some argue that the concepts and principles of mathematics exist independently of human thought, and therefore can be considered discovered rather than invented.
For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and systematically studied.
```
It's a good start. Then, use another prompt to expand:
```prompt
I am writing a detailed essay to answer the following prompt:
Is mathematics an invention or a discovery? Use careful reasoning to explain your answer. Here is what I have so far: Philosophers and mathematicians have long debated whether mathematics is an invention or a discovery.
Some argue that the concepts and principles of mathematics exist independently of human thought, and therefore can be considered discovered rather than invented.
For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and systematically studied. Write the next paragraph of my essay.
```
I used this prompt to generate several extensions and selected one that I liked, resulting in the following:
```prompt
Philosophers and mathematicians have long debated whether mathematics is invented or discovered.
Some argue that the concepts and principles of mathematics exist independently of human thought, and therefore can be considered to be discovered rather than invented.
For example, the concepts of numbers, ratios, and geometric shapes are observed in the natural world and can be abstracted and systematically studied.
On the other hand, some believe that mathematics is largely an invention of human thought.
Mathematical principles are based on logical reasoning, which is a characteristic of human thought.
For instance, Euclidean geometry is based on abstract axioms and premises, accepted as true without the need for proof.
Therefore, geometry can be considered an invention of human thought rather than a discovery.
Similarly, mathematical formulas and equations are used to model and predict physical phenomena, which are the result of human reasoning.
```
Using expansion prompts, we can write incrementally and iterate at each step. This is very useful when you need to **generate higher-quality output and refine it step by step**.
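The start-small-then-expand loop can be sketched as a helper that composes each round's prompt from the question and the draft so far. `buildExpansionPrompt` is a hypothetical name, and the actual model call is left as a comment:

```typescript
// Hypothetical sketch of the "start small, gradually expand" workflow.
// Each round sends the question plus the draft written so far back to the
// model and asks for exactly one more paragraph.
function buildExpansionPrompt(question: string, draftSoFar: string): string {
  if (draftSoFar === '') {
    return `Write an opening paragraph to answer the following question:\n${question}`;
  }
  return (
    `I am writing a detailed essay to answer the following prompt:\n${question}\n` +
    `Here is what I have so far:\n${draftSoFar}\n` +
    `Write the next paragraph of my essay.`
  );
}

const question = 'Is mathematics an invention or a discovery?';
let draft = '';
for (let round = 1; round <= 3; round++) {
  const prompt = buildExpansionPrompt(question, draft);
  // const paragraph = await callYourModel(prompt); // send to the model of your choice
  const paragraph = `(paragraph ${round} chosen from several generations)`; // placeholder
  draft += (draft === '' ? '' : '\n') + paragraph; // keep the best output and iterate
}
console.log(draft);
```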
## Further Reading
- **Learn Prompting**: https://learnprompting.org/en-US/docs/intro
@ -0,0 +1,97 @@
import { Callout } from 'nextra/components';
# Prompt 使用指南
## Prompt 基本概念
生成式 AI 非常有用,但它需要人类指导。通常情况下,生成式 AI 就像公司新来的实习生一样,非常有能力,但需要清晰的指示才能做得好。能够正确地指导生成式 AI 是一项非常强大的技能。你可以通过发送一个 prompt 来指导生成式 AI这通常是一个文本指令。Prompt 是向助手提供的输入,它会影响输出结果。一个好的 Prompt 应该是结构化的,清晰的,简洁的,并且具有指向性。
## 如何写好一个结构化 prompt
<Callout type={'info'}>
结构化 prompt 是指 prompt 的构造应该有明确的逻辑和结构。例如,如果你想让模型生成一篇文章,你的
prompt 可能需要包括文章的主题,文章的大纲,文章的风格等信息。
</Callout>
让我们看一个基本的讨论问题的例子:
> _"我们星球面临的最紧迫的环境问题是什么,个人可以采取哪些措施来帮助解决这些问题?"_
我们可以将其转化为简单的助手提示,只需将「回答以下问题:」放在前面。
```prompt
回答以下问题:
我们星球面临的最紧迫的环境问题是什么,个人可以采取哪些措施来帮助解决这些问题?
```
这个提示生成的结果并不一致,有些只有一两个句子。一个典型的讨论回答应该有多个段落,因此这些结果并不理想。一个好的提示应该给出**具体的格式和内容指令**。您需要消除语言中的歧义以提高一致性和质量。这是一个更好的提示:
```prompt
写一篇高度详细的论文,包括引言、正文和结论段,回答以下问题:
我们星球面临的最紧迫的环境问题是什么,
个人可以采取哪些措施来帮助解决这些问题?
```
第二个提示生成了更长的输出和更好的结构。提示中使用 “论文” 一词是有意的,因为助手可以理解论文的定义,因此更有可能生成连贯的、结构化的回答。
<br />
## 如何提升其质量和效果
<Callout type={'info'}>
提升 prompt 质量和效果的方法主要有以下几点:
- **尽量明确你的需求:** 模型的输出会尽可能满足你的需求,所以如果你的需求不明确,输出可能会不如预期。
- **使用正确的语法和拼写:** 模型会尽可能模仿你的语言风格,所以如果你的语言风格有问题,输出可能也会有问题。
- **提供足够的上下文信息:** 模型会根据你提供的上下文信息生成输出,所以如果你提供的上下文信息不足,可能无法生成你想要的结果。
</Callout>
在为讨论问题制定有效的提示后,您现在需要细化生成的结果。这可能涉及到调整输出以符合诸如字数等限制,或将不同生成的结果的概念组合在一起。
迭代的一个简单方法是生成多个输出并查看它们,以了解正在使用的概念和结构。一旦评估了输出,您就可以选择最合适的输出并将它们组合成一个连贯的回答。另一种迭代的方法是逐步开始,然后**逐步扩展**。这需要不止一个提示:一个起始提示,用于撰写最初的一两段,然后是其他提示,以扩展已经写过的内容。以下是一个潜在的哲学讨论问题:
> _"数学是发明还是发现?用仔细的推理来解释你的答案。"_
将其添加到一个简单的提示中,如下所示:
```prompt
写一个起始段回答以下问题:
数学是发明还是发现?用仔细的推理来解释你的答案。
```
生成了一些回答,找到了一个我们喜欢的:
```prompt
哲学家和数学家之间争论不休,数学是发明还是发现。
一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。
```
这是一个不错的开端。然后使用另一个提示来扩展:
```prompt
我正在撰写一篇详细的短文,回答以下提示:
数学是发明还是发现?用仔细的推理来解释你的答案。这是我目前的情况: 哲学家和数学家之间争论不休,数学是发明还是发现。
一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。写下我文章的下一段。
```
使用此提示生成了几个扩展,并选择了一个我喜欢的,最终得到了以下结果:
```prompt
哲学家和数学家之间争论不休,数学是发明还是发现。
一些人认为,数学的概念和原则独立于人类思维存在,因此可以被认为是被发现而不是被发明的。
例如,数字、比率和几何形状的概念在自然界中被观察到,可以被抽象化和系统化地研究。
另一方面,有人认为数学在很大程度上是人类思维的发明。
数学原理基于逻辑推理,这是人类思维的一个特征。
例如,欧几里德几何基于抽象的公理和前提,被接受为真实而无需证明。
因此,可以认为几何是人类思维的发明,而不是一种发现。
同样,数学公式和方程用于模拟和预测物理现象,这又是人类推理的结果。
```
使用扩展提示,我们可以逐步地写作并在每个步骤上进行迭代。这对于需要**生成更高质量的输出并希望逐步修改**的情况非常有用。
## 扩展阅读
- **Learn Prompting**: https://learnprompting.org/zh-Hans/docs/intro

docs/agents/topics.mdx Normal file

@ -0,0 +1,7 @@
# Topic Usage Guide
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602496-fd72037a-735e-4cc2-aa56-2994bceaba81.png)
- **Save Topic:** During a conversation, if you want to save the current context and start a new topic, you can click the save button next to the send button.
- **Topic List:** Clicking on a topic in the list allows for quick switching of historical conversation records and continuing the conversation. You can also use the star icon <kbd>⭐️</kbd> to pin favorite topics to the top, or use the more button on the right to rename or delete topics.


@ -0,0 +1,6 @@
# 话题使用指南
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279602496-fd72037a-735e-4cc2-aa56-2994bceaba81.png)
- **保存话题:** 在聊天过程中,如果想要保存当前上下文并开启新的话题,可以点击发送按钮旁边的保存按钮。
- **话题列表:** 点击列表中的话题可以快速切换历史对话记录,并继续对话。你还可以通过点击星标图标 <kbd>⭐️</kbd> 将话题收藏置顶,或者通过右侧更多按钮对话题进行重命名和删除操作。


@ -0,0 +1,18 @@
# Configuring `OPENAI_PROXY_URL` Environment Variable but Getting Empty Response
### Problem Description
After configuring the `OPENAI_PROXY_URL` environment variable, you may encounter a situation where the AI returns an empty message. This may be due to an incorrect configuration of `OPENAI_PROXY_URL`.
### Solution
Recheck and confirm whether `OPENAI_PROXY_URL` is set correctly, including whether the `/v1` suffix is added correctly (if required).
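As a sanity check, you can normalize the base URL before using it. Below is a minimal sketch; the helper name is illustrative and not part of LobeChat, and it assumes your provider expects the `/v1` suffix (some providers already forward `/v1` themselves, in which case the suffix must not be added):

```python
def normalize_proxy_url(url: str) -> str:
    """Append a single trailing `/v1` segment if it is missing.

    Illustrative only: check your proxy provider's documentation
    first, since some providers must be used WITHOUT `/v1`.
    """
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url
```

For example, `https://example-proxy.com` would become `https://example-proxy.com/v1`, while a URL that already ends in `/v1` is left unchanged.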
### Related Discussion Links
- [Why is the return value blank after configuring environment variables in Docker?](https://github.com/lobehub/lobe-chat/discussions/623)
- [Reasons for errors when using third-party interfaces](https://github.com/lobehub/lobe-chat/discussions/734)
- [No response when the proxy server address is filled in for chat](https://github.com/lobehub/lobe-chat/discussions/1065)
If the problem still cannot be resolved, it is recommended to raise the issue in the community, providing relevant logs and configuration information for other developers or maintainers to offer assistance.


@ -0,0 +1,17 @@
# 配置 `OPENAI_PROXY_URL` 环境变量但返回值为空
### 问题描述
配置 `OPENAI_PROXY_URL` 环境变量后,可能会遇到 AI 的返回消息为空的情况。这可能是由于 `OPENAI_PROXY_URL` 配置不正确导致。
### 解决方案
重新检查并确认 `OPENAI_PROXY_URL` 是否设置正确,包括是否正确地添加了 `/v1` 后缀(如果需要)。
### 相关讨论链接
- [Docker 安装,配置好环境变量后,为何返回值是空白?](https://github.com/lobehub/lobe-chat/discussions/623)
- [使用第三方接口报错的原因](https://github.com/lobehub/lobe-chat/discussions/734)
- [代理服务器地址填了聊天没任何反应](https://github.com/lobehub/lobe-chat/discussions/1065)
如果问题依旧无法解决,建议在社区中提出问题,附上相关日志和配置信息,以便其他开发者或维护者提供帮助。


@ -0,0 +1,70 @@
# Encounter `UNABLE_TO_VERIFY_LEAF_SIGNATURE` Error When Using Proxy
## Problem Description
When deploying privately and using a proxy (e.g., Surge) for network requests, you may encounter certificate verification errors.
The error message may look like this:
```bash
[TypeError: fetch failed] {
cause: [Error: unable to verify the first certificate] {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
}
```
Or:
```json
{
"endpoint": "https://api.openai.com/v1",
"error": {
"cause": {
"code": "UNABLE_TO_VERIFY_LEAF_SIGNATURE"
}
}
}
```
This problem typically occurs when using a proxy server with a self-signed certificate or a man-in-the-middle certificate that is not trusted by Node.js by default.
## Solution
To resolve this issue, you can bypass Node.js certificate validation by adding an environment variable when starting the application. Specifically, you can add `NODE_TLS_REJECT_UNAUTHORIZED=0` to the startup command. For example:
```bash
NODE_TLS_REJECT_UNAUTHORIZED=0 npm run start
```
Alternatively, when running in a Docker container, you can set the environment variable in the Dockerfile or docker-compose.yml:
```dockerfile
# In the Dockerfile
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
```
```yaml
# In the docker-compose.yml
environment:
- NODE_TLS_REJECT_UNAUTHORIZED=0
```
Example Docker run command:
```bash
docker run -e NODE_TLS_REJECT_UNAUTHORIZED=0 <other parameters> <image name>
```
Please note that this method reduces security as it allows Node.js to accept unverified certificates. Therefore, it is only recommended for use in privately deployed environments with complete trust and should be reverted to the default certificate validation settings after resolving the certificate issue.
## More Secure Alternatives
If possible, it is recommended to address the certificate issue using the following methods:
1. Ensure all man-in-the-middle certificates are correctly installed on the proxy server and the corresponding clients.
2. Replace self-signed or man-in-the-middle certificates with valid certificates issued by trusted certificate authorities.
3. Properly configure the certificate chain in the code to ensure Node.js can validate to the root certificate.
Implementing these methods can resolve certificate validation issues without compromising security.


@ -1,30 +1,12 @@
# 常见问题
# 使用代理时遇到 `UNABLE_TO_VERIFY_LEAF_SIGNATURE` 错误
## 配置 `OPENAI_PROXY_URL` 环境变量但返回值为空
## 问题描述
### 问题描述
在私有化部署时使用代理例如Surge进行网络请求可能会遇到证书验证错误。
配置 `OPENAI_PROXY_URL` 环境变量后,可能会遇到 AI 的返回消息为空的情况。这可能是由于 `OPENAI_PROXY_URL` 配置不正确导致。
此时错误信息可能如下:
### 解决方案
重新检查并确认 `OPENAI_PROXY_URL` 是否设置正确,包括是否正确地添加了 `/v1` 后缀(如果需要)。
### 相关讨论链接
- [Docker 安装,配置好环境变量后,为何返回值是空白?](https://github.com/lobehub/lobe-chat/discussions/623)
- [使用第三方接口报错的原因](https://github.com/lobehub/lobe-chat/discussions/734)
- [代理服务器地址填了聊天没任何反应](https://github.com/lobehub/lobe-chat/discussions/1065)
如果问题依旧无法解决,建议在社区中提出问题,附上相关日志和配置信息,以便其他开发者或维护者提供帮助。
## 使用代理中转请求时例如Surge报 `UNABLE_TO_VERIFY_LEAF_SIGNATURE` 错误
### 问题描述
在私有化部署时,进行网络请求可能会遇到证书验证错误。错误信息可能如下:
```
```bash
[TypeError: fetch failed] {
cause: [Error: unable to verify the first certificate] {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
@ -34,7 +16,7 @@
或者是:
```
```json
{
"endpoint": "https://api.openai.com/v1",
"error": {
@ -47,7 +29,7 @@
这个问题通常出现在使用代理服务器时,代理服务器使用的自签名证书或中间人证书不被 Node.js 默认信任。
### 解决方案
## 解决方案
要解决这个问题,可以在启动应用时添加环境变量,跳过 Node.js 的证书验证。具体做法是在启动命令中加入 `NODE_TLS_REJECT_UNAUTHORIZED=0`。例如:
@ -76,7 +58,7 @@ docker run -e NODE_TLS_REJECT_UNAUTHORIZED=0 <其他参数> <镜像名>
请注意,这种方法会降低安全性,因为它允许 Node.js 接受未经验证的证书。因此,仅建议在完全信任网络环境的私有化部署中使用,且需要在解决证书问题后恢复默认的证书验证设置。
### 更安全的替代方案
## 更安全的替代方案
如果可能,建议通过以下方法解决证书问题:


@ -0,0 +1,215 @@
import { Callout } from 'nextra/components';
# Environment Variables
LobeChat provides some additional configuration options during deployment, which can be customized using environment variables.
## Common Variables
### `ACCESS_CODE`
- Type: Optional
- Description: Add a password to access the LobeChat service. You can set a long password to prevent brute force attacks.
- Default: -
- Example: `awCTe)re_r74` or `rtrt_ewee3@09!`
### `ENABLE_OAUTH_SSO`
- Type: Optional
- Description: Enable Single Sign-On (SSO) for LobeChat. Set to `1` to enable SSO. For more information, see [Authentication Services](#authentication-services).
- Default: -
- Example: `1`
### `NEXT_PUBLIC_BASE_PATH`
- Type: Optional
- Description: Add a `basePath` for LobeChat.
- Default: -
- Example: `/test`
### `DEFAULT_AGENT_CONFIG`
- Type: Optional
- Description: Used to configure the default settings for the LobeChat default agent. It supports various data types and structures, including key-value pairs, nested fields, array values, and more.
- Default: -
- Example: `model=gpt-4-1106-preview;params.max_tokens=300;plugins=search-engine,lobe-image-designer`
The `DEFAULT_AGENT_CONFIG` is used to configure the default settings for the LobeChat default agent. It supports various data types and structures, including key-value pairs, nested fields, array values, and more. The table below provides detailed information on the configuration options, examples, and corresponding explanations for the `DEFAULT_AGENT_CONFIG` environment variable:
| Configuration Type | Example | Explanation |
| --- | --- | --- |
| Basic Key-Value Pair | `model=gpt-4` | Set the model to `gpt-4`. |
| Nested Field | `tts.sttLocale=en-US` | Set the language locale for the text-to-speech service to `en-US`. |
| Array | `plugins=search-engine,lobe-image-designer` | Enable the `search-engine` and `lobe-image-designer` plugins. |
| Chinese Comma | `plugins=search-enginelobe-image-designer` | Same as above, demonstrating support for Chinese comma separation. |
| Multiple Configurations | `model=glm-4;provider=zhipu` | Set the model to `glm-4` and the model provider to `zhipu`. |
| Numeric Value | `params.max_tokens=300` | Set the maximum tokens to `300`. |
| Boolean Value | `enableAutoCreateTopic=true` | Enable automatic topic creation. |
| Special Characters | `inputTemplate="Hello; I am a bot;"` | Set the input template to `Hello; I am a bot;`. |
| Error Handling | `model=gpt-4;maxToken` | Ignore invalid entry `maxToken` and only parse `model=gpt-4`. |
| Value Override | `model=gpt-4;model=gpt-4-1106-preview` | If a key is repeated, use the value that appears last; in this case, the value of `model` is `gpt-4-1106-preview`. |
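To make the rules above concrete, here is a rough sketch of a parser that follows the table. This is a simplified illustration, not LobeChat's actual implementation; in particular, the quoted-value case with embedded semicolons is omitted for brevity:

```python
def _parse_value(value: str):
    # Arrays: split on English or Chinese commas
    if "," in value or "," in value:
        return [v.strip() for v in value.replace(",", ",").split(",")]
    if value in ("true", "false"):  # booleans
        return value == "true"
    if value.isdigit():             # numbers
        return int(value)
    return value

def parse_default_agent_config(raw: str) -> dict:
    """Simplified sketch of DEFAULT_AGENT_CONFIG parsing (illustrative)."""
    config: dict = {}
    for entry in raw.split(";"):
        if "=" not in entry:
            continue  # error handling: ignore invalid entries like `maxToken`
        key, value = entry.split("=", 1)
        node = config
        parts = key.strip().split(".")  # nested fields, e.g. tts.sttLocale
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = _parse_value(value.strip())  # repeated keys: last wins
    return config
```

For instance, `model=glm-4;provider=zhipu` yields `{"model": "glm-4", "provider": "zhipu"}`.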
Further reading:
- [\[RFC\] 022 - Default Assistant Parameters Configuration via Environment Variables](https://github.com/lobehub/lobe-chat/discussions/913)
## Identity Verification Service
### General Settings
#### `ENABLE_OAUTH_SSO`
- Type: Required
- Description: Enable single sign-on (SSO) for LobeChat. Set to `1` to enable single sign-on.
- Default: `-`
- Example: `1`
#### `NEXTAUTH_SECRET`
- Type: Required
- Description: Key used to encrypt the session tokens in Auth.js. You can generate the key using the following command: `openssl rand -base64 32`.
- Default: `-`
- Example: `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=`
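If `openssl` is not available, an equivalent 32-byte base64 secret can be generated with any cryptographically secure random source; for example, in Python (an illustrative equivalent, not a LobeChat utility):

```python
import base64
import secrets

def generate_nextauth_secret() -> str:
    """Generate a random secret equivalent to `openssl rand -base64 32`."""
    return base64.b64encode(secrets.token_bytes(32)).decode("ascii")
```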
### Auth0
<Callout>
Currently, we only support the Auth0 identity verification service provider. If you need to use
other identity verification service providers, you can submit a [feature
request](https://github.com/lobehub/lobe-chat/issues/new/choose) or Pull Request.
</Callout>
#### `AUTH0_CLIENT_ID`
- Type: Required
- Description: Client ID of the Auth0 application. You can access it [here][auth0-client-page] and navigate to the application settings to view.
- Default: `-`
- Example: `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P`
#### `AUTH0_CLIENT_SECRET`
- Type: Required
- Description: Client Secret of the Auth0 application.
- Default: `-`
- Example: `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm`
#### `AUTH0_ISSUER`
- Type: Required
- Description: Issuer/domain of the Auth0 application.
- Default: `-`
- Example: `https://example.auth0.com`
## Plugin Service
### `PLUGINS_INDEX_URL`
- Type: Optional
- Description: Index address of the LobeChat plugin market. If you have deployed the plugin market service on your own, you can use this variable to override the default plugin market address.
- Default: `https://chat-plugins.lobehub.com`
### `PLUGIN_SETTINGS`
- Type: Optional
- Description: Used to configure plugin settings. Use the format `plugin-name:setting-field=setting-value` to configure the settings of the plugin. Separate multiple setting fields with a semicolon `;`, and separate multiple plugin settings with a comma `,`.
- Default: `-`
- Example: `search-engine:SERPAPI_API_KEY=xxxxx,plugin-2:key1=value1;key2=value2`
The above example sets the `SERPAPI_API_KEY` of the `search-engine` plugin to `xxxxx`, and sets `key1` of `plugin-2` to `value1`, and `key2` to `value2`. The generated plugin settings configuration is as follows:
```json
{
"plugin-2": {
"key1": "value1",
"key2": "value2"
},
"search-engine": {
"SERPAPI_API_KEY": "xxxxx"
}
}
```
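The mapping from the string format to the JSON above can be sketched as follows (an illustrative parser, not LobeChat's actual code):

```python
def parse_plugin_settings(raw: str) -> dict:
    """Parse `plugin-name:field=value;field2=value2,plugin-2:...` strings."""
    settings: dict = {}
    for plugin_entry in raw.split(","):
        name, _, fields = plugin_entry.partition(":")
        plugin = settings.setdefault(name.strip(), {})
        for field in fields.split(";"):
            if "=" in field:
                key, value = field.split("=", 1)
                plugin[key.strip()] = value.strip()
    return settings
```

Running it on the example above reproduces the generated configuration shown.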
## Assistant Market
### `AGENTS_INDEX_URL`
- Type: Optional
- Description: Index address of the LobeChat assistant market. If you have deployed the assistant market service on your own, you can use this variable to override the default market address.
- Default: `https://chat-agents.lobehub.com`
## Data Statistics
### Vercel Analytics
#### `NEXT_PUBLIC_ANALYTICS_VERCEL`
- Type: Optional
- Description: Used to configure the environment variable for Vercel Analytics. Set to `1` to enable Vercel Analytics.
- Default: `-`
- Example: `1`
#### `NEXT_PUBLIC_VERCEL_DEBUG`
- Type: Optional
- Description: Used to enable the debug mode for Vercel Analytics.
- Default: `-`
- Example: `1`
### Posthog Analytics
#### `NEXT_PUBLIC_ANALYTICS_POSTHOG`
- Type: Optional
- Description: Used to enable the environment variable for [PostHog Analytics][posthog-analytics-url]. Set to `1` to enable PostHog Analytics.
- Default: `-`
- Example: `1`
#### `NEXT_PUBLIC_POSTHOG_KEY`
- Type: Optional
- Description: Set the PostHog project Key.
- Default: `-`
- Example: `phc_xxxxxxxx`
#### `NEXT_PUBLIC_POSTHOG_HOST`
- Type: Optional
- Description: Set the deployment address of the PostHog service, defaulting to the official SaaS address.
- Default: `https://app.posthog.com`
- Example: `https://example.com`
#### `NEXT_PUBLIC_POSTHOG_DEBUG`
- Type: Optional
- Description: Enable the debug mode for PostHog.
- Default: `-`
- Example: `1`
### Umami Analytics
#### `NEXT_PUBLIC_ANALYTICS_UMAMI`
- Type: Optional
- Description: Used to enable the environment variable for [Umami Analytics][umami-analytics-url]. Set to `1` to enable Umami Analytics.
- Default: `-`
- Example: `1`
#### `NEXT_PUBLIC_UMAMI_SCRIPT_URL`
- Type: Optional
- Description: The URL of the Umami script, defaulting to the script URL provided by Umami Cloud.
- Default: `https://analytics.umami.is/script.js`
- Example: `https://umami.your-site.com/script.js`
#### `NEXT_PUBLIC_UMAMI_WEBSITE_ID`
- Type: Required
- Description: Your Umami Website ID.
- Default: `-`
- Example: `E738D82A-EE9E-4806-A81F-0CA3CAE57F65`
[auth0-client-page]: https://manage.auth0.com/dashboard
[azure-api-verion-url]: https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version
[openai-api-page]: https://platform.openai.com/account/api-keys
[posthog-analytics-url]: https://posthog.com
[umami-analytics-url]: https://umami.is


@ -0,0 +1,214 @@
import { Callout } from 'nextra/components';
# 环境变量
LobeChat 在部署时提供了一些额外的配置项,你可以使用环境变量进行自定义设置。
## 通用变量
### `ACCESS_CODE`
- 类型:可选
- 描述:添加访问 LobeChat 服务的密码,你可以设置一个长密码以防被爆破
- 默认值:-
- 示例:`awCTe)re_r74` or `rtrt_ewee3@09!`
### `ENABLE_OAUTH_SSO`
- 类型:可选
- 描述:为 LobeChat 启用单点登录 (SSO)。设置为 `1` 以启用单点登录。有关详细信息,请参阅[身份验证服务](#身份验证服务)。
- 默认值: `-`
- 示例: `1`
### `NEXT_PUBLIC_BASE_PATH`
- 类型:可选
- 描述:为 LobeChat 添加 `basePath`
- 默认值: `-`
- 示例: `/test`
### `DEFAULT_AGENT_CONFIG`
- 类型:可选
- 描述:用于配置 LobeChat 默认助理的默认配置。它支持多种数据类型和结构,包括键值对、嵌套字段、数组值等。
- 默认值:`-`
- 示例:`model=gpt-4-1106-preview;params.max_tokens=300;plugins=search-engine,lobe-image-designer`
`DEFAULT_AGENT_CONFIG` 用于配置 LobeChat 默认助理的默认配置。它支持多种数据类型和结构,包括键值对、嵌套字段、数组值等。下表详细说明了 `DEFAULT_AGENT_CONFIG` 环境变量的配置项、示例以及相应解释:
| 配置项类型 | 示例 | 解释 |
| --- | --- | --- |
| 基本键值对 | `model=gpt-4` | 设置模型为 `gpt-4`。 |
| 嵌套字段 | `tts.sttLocale=en-US` | 设置文本到语音服务的语言区域为 `en-US`。 |
| 数组 | `plugins=search-engine,lobe-image-designer` | 启用 `search-engine` 和 `lobe-image-designer` 插件。 |
| 中文逗号 | `plugins=search-enginelobe-image-designer` | 同上,演示支持中文逗号分隔。 |
| 多个配置项 | `model=glm-4;provider=zhipu` | 设置模型为 `glm-4` 且模型服务商为 `zhipu`。 |
| 数字值 | `params.max_tokens=300` | 设置最大令牌数为 `300`。 |
| 布尔值 | `enableAutoCreateTopic=true` | 启用自动创建主题。 |
| 特殊字符 | `inputTemplate="Hello; I am a bot;"` | 设置输入模板为 `Hello; I am a bot;`。 |
| 错误处理 | `model=gpt-4;maxToken` | 忽略无效条目 `maxToken`,仅解析出 `model=gpt-4`。 |
| 值覆盖 | `model=gpt-4;model=gpt-4-1106-preview` | 如果键重复,使用最后一次出现的值,此处 `model` 的值为 `gpt-4-1106-preview`。 |
相关阅读:
- [\[RFC\] 022 - 环境变量配置默认助手参数](https://github.com/lobehub/lobe-chat/discussions/913)
## 身份验证服务
### 通用设置
#### `ENABLE_OAUTH_SSO`
- 类型:必选
- 描述:为 LobeChat 启用单点登录 (SSO)。设置为 `1` 以启用单点登录。
- 默认值: `-`
- 示例: `1`
#### `NEXTAUTH_SECRET`
- 类型:必选
- 描述:用于加密 Auth.js 会话令牌的密钥。您可以使用以下命令生成密钥: `openssl rand -base64 32`.
- 默认值: `-`
- 示例: `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=`
### Auth0
<Callout>
目前我们只支持 Auth0 身份验证服务提供商。如果您需要使用其他身份验证服务提供商,可以提交
[功能请求](https://github.com/lobehub/lobe-chat/issues/new/choose) 或 Pull Request。
</Callout>
#### `AUTH0_CLIENT_ID`
- 类型:必选
- 描述: Auth0 应用程序的 Client ID您可以访问[这里][auth0-client-page]并导航至应用程序设置来查看
- 默认值: `-`
- 示例: `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P`
#### `AUTH0_CLIENT_SECRET`
- 类型:必选
- 描述: Auth0 应用程序的 Client Secret
- 默认值: `-`
- 示例: `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm`
#### `AUTH0_ISSUER`
- 类型:必选
- 描述: Auth0 应用程序的签发人 / 域
- 默认值: `-`
- 示例: `https://example.auth0.com`
## 插件服务
### `PLUGINS_INDEX_URL`
- 类型:可选
- 描述LobeChat 插件市场的索引地址,如果你自行部署了插件市场的服务,可以使用该变量来覆盖默认的插件市场地址
- 默认值:`https://chat-plugins.lobehub.com`
### `PLUGIN_SETTINGS`
- 类型:可选
- 描述:用于配置插件的设置,使用 `插件名:设置字段=设置值` 的格式来配置插件的设置,多个设置字段用英文分号 `;` 隔开,多个插件设置使用英文逗号`,`隔开。
- 默认值:`-`
- 示例:`search-engine:SERPAPI_API_KEY=xxxxx,plugin-2:key1=value1;key2=value2`
上述示例表示设置 `search-engine` 插件的 `SERPAPI_API_KEY` 为 `xxxxx`,设置 `plugin-2` 的 `key1` 为 `value1``key2` 为 `value2`。生成的插件设置配置如下:
```json
{
"plugin-2": {
"key1": "value1",
"key2": "value2"
},
"search-engine": {
"SERPAPI_API_KEY": "xxxxx"
}
}
```
## 助手市场
### `AGENTS_INDEX_URL`
- 类型:可选
- 描述LobeChat 助手市场的索引地址,如果你自行部署了助手市场的服务,可以使用该变量来覆盖默认的市场地址
- 默认值:`https://chat-agents.lobehub.com`
## 数据统计
### Vercel Analytics
#### `NEXT_PUBLIC_ANALYTICS_VERCEL`
- 类型:可选
- 描述:用于配置 Vercel Analytics 的环境变量,当设为 `1` 时开启 Vercel Analytics
- 默认值: `-`
- 示例:`1`
#### `NEXT_PUBLIC_VERCEL_DEBUG`
- 类型:可选
- 描述:用于开启 Vercel Analytics 的调试模式
- 默认值: `-`
- 示例:`1`
### Posthog Analytics
#### `NEXT_PUBLIC_ANALYTICS_POSTHOG`
- 类型:可选
- 描述:用于开启 [PostHog Analytics][posthog-analytics-url] 的环境变量,设为 `1` 时开启 PostHog Analytics
- 默认值: `-`
- 示例:`1`
#### `NEXT_PUBLIC_POSTHOG_KEY`
- 类型:可选
- 描述:设置 PostHog 项目 Key
- 默认值: `-`
- 示例:`phc_xxxxxxxx`
#### `NEXT_PUBLIC_POSTHOG_HOST`
- 类型:可选
- 描述:设置 PostHog 服务的部署地址,默认为官方的 SAAS 地址
- 默认值:`https://app.posthog.com`
- 示例:`https://example.com`
#### `NEXT_PUBLIC_POSTHOG_DEBUG`
- 类型:可选
- 描述:开启 PostHog 的调试模式
- 默认值: `-`
- 示例:`1`
### Umami Analytics
#### `NEXT_PUBLIC_ANALYTICS_UMAMI`
- 类型:可选
- 描述:用于开启 [Umami Analytics][umami-analytics-url] 的环境变量,设为 `1` 时开启 Umami Analytics
- 默认值: `-`
- 示例:`1`
#### `NEXT_PUBLIC_UMAMI_SCRIPT_URL`
- 类型:可选
- 描述Umami 脚本的网址,默认为 Umami Cloud 提供的脚本网址
- 默认值:`https://analytics.umami.is/script.js`
- 示例:`https://umami.your-site.com/script.js`
#### `NEXT_PUBLIC_UMAMI_WEBSITE_ID`
- 类型:必选
- 描述:你的 Umami 的 Website ID
- 默认值:`-`
- 示例:`E738D82A-EE9E-4806-A81F-0CA3CAE57F65`
[auth0-client-page]: https://manage.auth0.com/dashboard
[azure-api-verion-url]: https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version
[openai-api-page]: https://platform.openai.com/account/api-keys
[posthog-analytics-url]: https://posthog.com
[umami-analytics-url]: https://umami.is


@ -0,0 +1,135 @@
import { Callout } from 'nextra/components';
# Model Service Providers
When deploying LobeChat, a rich set of environment variables related to model service providers is provided, allowing you to easily define the model service providers to be enabled in LobeChat.
## OpenAI
### `OPENAI_API_KEY`
- Type: Required
- Description: This is the API key you applied for on the OpenAI account page; you can check it out [here][openai-api-page]
- Default: -
- Example: `sk-xxxxxx...xxxxxx`
### `OPENAI_PROXY_URL`
- Type: Optional
- Description: If you manually configure the OpenAI interface proxy, you can use this configuration item to override the default OpenAI API request base URL
- Default: `https://api.openai.com/v1`
- Example: `https://api.chatanywhere.cn` or `https://aihubmix.com/v1`
<Callout type={'warning'}>
Please check the request suffix of your proxy service provider. Some proxy service providers may
add `/v1` to the request suffix, while others may not. If you find that the AI returns an empty
message during testing, try adding the `/v1` suffix and retry.{' '}
</Callout>
<Callout type={'info'}>
Whether to fill in `/v1` is closely related to the model service provider. For example, the
default address of openai is `api.openai.com/v1`. If your proxy forwards the `/v1` interface, you
can simply fill in `proxy.com`. However, if the model service provider directly forwards the
`api.openai.com` domain, then you need to add `/v1` to the URL yourself.{' '}
</Callout>
Related discussions:
- [Why is the return value blank after installing Docker, configuring environment variables?](https://github.com/lobehub/lobe-chat/discussions/623)
- [Reasons for errors when using third-party interfaces](https://github.com/lobehub/lobe-chat/discussions/734)
- [No response in chat after filling in the proxy server address](https://github.com/lobehub/lobe-chat/discussions/1065)
### `CUSTOM_MODELS`
- Type: Optional
- Description: Used to control the model list, use `+` to add a model, use `-` to hide a model, use `model_name=display_name` to customize the display name of a model, separated by commas.
- Default: `-`
- Example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
The above example adds `qwen-7b-chat` and `glm-6b` to the model list, removes `gpt-3.5-turbo` from the list, and displays the name of `gpt-4-0125-preview` as `gpt-4-turbo`. If you want to disable all models first and then enable specific models, you can use `-all,+gpt-3.5-turbo`, which means only `gpt-3.5-turbo` will be enabled.
You can find all current model names in [modelProviders](https://github.com/lobehub/lobe-chat/tree/main/src/config/modelProviders).
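The list syntax can be sketched as a small parser. This is illustrative only; the field names in the returned dictionary are assumptions for the example, not LobeChat's internal structure:

```python
def parse_custom_models(raw: str) -> dict:
    """Parse a CUSTOM_MODELS string into add/remove/rename instructions."""
    result = {"add": [], "remove": [], "rename": {}, "removeAll": False}
    for token in raw.split(","):
        token = token.strip()
        if token == "-all":
            result["removeAll"] = True  # disable all models first
        elif token.startswith("+"):
            result["add"].append(token[1:])
        elif token.startswith("-"):
            result["remove"].append(token[1:])
        elif "=" in token:
            name, display = token.split("=", 1)
            result["rename"][name] = display
    return result
```

Applied to the example above, this adds `qwen-7b-chat` and `glm-6b`, removes `gpt-3.5-turbo`, and renames `gpt-4-0125-preview` to `gpt-4-turbo`.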
## Azure OpenAI
If you need to use Azure OpenAI to provide model services, you can refer to the [Deploying with Azure OpenAI](../Deployment/Deploy-with-Azure-OpenAI.en-US.md) section for detailed steps. Here, we will list the environment variables related to Azure OpenAI.
### `USE_AZURE_OPENAI`
- Type: Optional
- Description: Set this value to `1` to enable Azure OpenAI configuration
- Default: -
- Example: `1`
### `AZURE_API_KEY`
- Type: Optional
- Description: This is the API key you applied for on the Azure OpenAI account page
- Default: -
- Example: `c55168be3874490ef0565d9779ecd5a6`
### `AZURE_API_VERSION`
- Type: Optional
- Description: The API version of Azure, following the format YYYY-MM-DD
- Default: `2023-08-01-preview`
- Example: `2023-05-15`, refer to [latest version][azure-api-verion-url]
## ZHIPU AI
### `ZHIPU_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the ZHIPU AI service
- Default: -
- Example: `4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
## Moonshot AI
### `MOONSHOT_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the Moonshot AI service
- Default: -
- Example: `Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
## Google AI
### `GOOGLE_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the Google AI Platform to access Google AI services
- Default: -
- Example: `AIraDyDwcw254kwJaGjI9wwaHcdDCS__Vt3xQE`
## AWS Bedrock
### `AWS_ACCESS_KEY_ID`
- Type: Required
- Description: Access key ID for AWS service authentication
- Default: -
- Example: `AKIA5STVRLFSB4S9HWBR`
### `AWS_SECRET_ACCESS_KEY`
- Type: Required
- Description: Key for AWS service authentication
- Default: -
- Example: `Th3vXxLYpuKcv2BARktPSTPxx+jbSiFT6/0w7oEC`
### `AWS_REGION`
- Type: Optional
- Description: Region setting for AWS services
- Default: `us-east-1`
- Example: `us-east-1`
## Ollama
### `OLLAMA_PROXY_URL`
- Type: Optional
- Description: Used to enable the Ollama service. Setting this will display the available open-source language models in the model list, and you can also specify custom language models
- Default: -
- Example: `http://127.0.0.1:11434/v1`


@ -0,0 +1,133 @@
import { Callout } from 'nextra/components';
# 模型服务商
LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,你可以使用这些环境变量轻松定义需要在 LobeChat 中开启的模型服务商。
## OpenAI
### `OPENAI_API_KEY`
- 类型:必选
- 描述:这是你在 OpenAI 账户页面申请的 API 密钥,可以前往[这里][openai-api-page]查看
- 默认值:-
- 示例:`sk-xxxxxx...xxxxxx`
### `OPENAI_PROXY_URL`
- 类型:可选
- 描述:如果你手动配置了 OpenAI 接口代理,可以使用此配置项来覆盖默认的 OpenAI API 请求基础 URL
- 默认值:`https://api.openai.com/v1`
- 示例:`https://api.chatanywhere.cn` 或 `https://aihubmix.com/v1`
<Callout type={'warning'}>
请检查你的代理服务商的请求后缀,有的代理服务商会在请求后缀添加
`/v1`,有的则不会。如果你在测试时发现 AI 返回的消息为空,请尝试添加 `/v1` 后缀后重试。
</Callout>
<Callout type={'info'}>
是否填写 `/v1` 跟模型服务商有很大关系,比如 openai 的默认地址是 `api.openai.com/v1`
。如果你的代理商转发了 `/v1` 这个接口,那么直接填 `proxy.com` 即可。 但如果模型服务商是直接转发了
`api.openai.com` 域名,那么你就要自己加上 `/v1` 这个 url。
</Callout>
相关讨论:
- [Docker 安装,配置好环境变量后,为何返回值是空白?](https://github.com/lobehub/lobe-chat/discussions/623)
- [使用第三方接口报错的原因](https://github.com/lobehub/lobe-chat/discussions/734)
- [代理服务器地址填了聊天没任何反应](https://github.com/lobehub/lobe-chat/discussions/1065)
### `CUSTOM_MODELS`
- 类型:可选
- 描述:用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名` 来自定义模型的展示名,用英文逗号隔开。
- 默认值:`-`
- 示例:`+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
上面示例表示增加 `qwen-7b-chat` 和 `glm-6b` 到模型列表,而从列表中删除 `gpt-3.5-turbo`,并将 `gpt-4-0125-preview` 模型名字展示为 `gpt-4-turbo`。如果你想先禁用所有模型,再启用指定模型,可以使用 `-all,+gpt-3.5-turbo`,则表示仅启用 `gpt-3.5-turbo`。
你可以在 [modelProviders](https://github.com/lobehub/lobe-chat/tree/main/src/config/modelProviders) 查找到当前的所有模型名。
## Azure OpenAI
如果你需要使用 Azure OpenAI 来提供模型服务,可以查阅 [使用 Azure OpenAI 部署](../Deployment/Deploy-with-Azure-OpenAI.zh-CN.md) 章节查看详细步骤,这里将列举和 Azure OpenAI 相关的环境变量。
### `USE_AZURE_OPENAI`
- 类型:可选
- 描述:设置该值为 `1` 开启 Azure OpenAI 配置
- 默认值:-
- 示例:`1`
### `AZURE_API_KEY`
- 类型:可选
- 描述:这是你在 Azure OpenAI 账户页面申请的 API 密钥
- 默认值:-
- 示例:`c55168be3874490ef0565d9779ecd5a6`
### `AZURE_API_VERSION`
- 类型:可选
- 描述Azure 的 API 版本,遵循 YYYY-MM-DD 格式
- 默认值:`2023-08-01-preview`
- 示例:`2023-05-15`,查阅[最新版本][azure-api-verion-url]
## 智谱 AI
### `ZHIPU_API_KEY`
- 类型:必选
- 描述:这是你在 智谱 AI 服务中申请的 API 密钥
- 默认值:-
- 示例:`4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
## Moonshot AI
### `MOONSHOT_API_KEY`
- 类型:必选
- 描述:这是你在 Moonshot AI 服务中申请的 API 密钥
- 默认值:-
- 示例:`Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
## Google AI
### `GOOGLE_API_KEY`
- 类型:必选
- 描述:这是你在 Google AI Platform 申请的 API 密钥,用于访问 Google AI 服务
- 默认值:-
- 示例:`AIraDyDwcw254kwJaGjI9wwaHcdDCS__Vt3xQE`
## AWS Bedrock
### `AWS_ACCESS_KEY_ID`
- 类型:必选
- 描述:用于 AWS 服务认证的访问键 ID
- 默认值:-
- 示例:`AKIA5STVRLFSB4S9HWBR`
### `AWS_SECRET_ACCESS_KEY`
- 类型:必选
- 描述:用于 AWS 服务认证的密钥
- 默认值:-
- 示例:`Th3vXxLYpuKcv2BARktPSTPxx+jbSiFT6/0w7oEC`
### `AWS_REGION`
- 类型:可选
- 描述AWS 服务的区域设置
- 默认值:`us-east-1`
- 示例:`us-east-1`
## Ollama
### `OLLAMA_PROXY_URL`
- 类型:可选
- 描述:用于启用 Ollama 服务,设置后可在语言模型列表内展示可选开源语言模型,也可以指定自定义语言模型
- 默认值:-
- 示例:`http://127.0.0.1:11434/v1`

docs/package.json Normal file

@ -0,0 +1,5 @@
{
"name": "@lobehub-docs/lobe-chat",
"version": "1.0.0",
"private": true
}


@ -0,0 +1,40 @@
# Plugin Usage
The plugin system is a key element in expanding the capabilities of assistants in LobeChat. You can enhance the assistant's abilities by enabling a variety of plugins.
Watch the following video to quickly get started with using LobeChat plugins:
<video
muted
controls
loop
autoPlay
src="https://github.com/lobehub/lobe-chat/assets/28616219/94d4c312-1699-4e24-8782-138883678c9e">
</video>
## Plugin Store
You can access the Plugin Store by navigating to "Extension Tools" -> "Plugin Store" in the session toolbar.
![820shots\_so](https://github.com/lobehub/lobe-chat/assets/28616219/ab4e60d0-1293-49ac-8798-cb29b3b789e6)
The Plugin Store allows you to directly install and use plugins within LobeChat.
![image](https://github.com/lobehub/lobe-chat/assets/28616219/d7a5d821-116f-4be6-8a1a-38d81a5ea0ea)
## Using Plugins
After installing a plugin, simply enable it under the current assistant to use it.
![809shots\_so](https://github.com/lobehub/lobe-chat/assets/28616219/76ab1ae7-a4f9-4285-8ebd-45b90251aba1)
## Plugin Configuration
Some plugins may require specific configurations, such as API keys.
After installing a plugin, you can click on "Settings" to enter the plugin's settings and fill in the required configurations:
![image](https://github.com/lobehub/lobe-chat/assets/28616219/10eb3023-4528-4b06-8092-062e7b3865cc)
![image](https://github.com/lobehub/lobe-chat/assets/28616219/ab2e4c25-4b11-431b-9266-442d8b14cb41)


@ -4,7 +4,13 @@
查看以下视频,快速上手使用 LobeChat 插件:
<https://github.com/lobehub/lobe-chat/assets/28616219/94d4c312-1699-4e24-8782-138883678c9e>
<video
muted
controls
loop
autoPlay
src="https://github.com/lobehub/lobe-chat/assets/28616219/94d4c312-1699-4e24-8782-138883678c9e"
></video>
## 插件商店
@ -31,23 +37,3 @@
![image](https://github.com/lobehub/lobe-chat/assets/28616219/10eb3023-4528-4b06-8092-062e7b3865cc)
![image](https://github.com/lobehub/lobe-chat/assets/28616219/ab2e4c25-4b11-431b-9266-442d8b14cb41)
## 安装自定义插件
如果你希望安装一个不在 LobeChat 插件商店中的插件,例如自己开发的 LobeChat你可以点击「自定义插件」进行安装
<https://github.com/lobehub/lobe-chat/assets/28616219/034a328c-8465-4499-8f93-fdcdb03343cd>
此外LobeChat 的插件机制兼容了 ChatGPT 的插件,因此你可以一键安装相应的 ChatGPT 插件。
如果你希望尝试自行安装自定义插件,你可以使用以下链接来尝试:
- `自定义 Lobe 插件` Mock Credit Card<https://lobe-plugin-mock-credit-card.vercel.app/manifest.json>
- `ChatGPT 插件` Access Links<https://www.accesslinks.ai/.well-known/ai-plugin.json>
![image](https://github.com/lobehub/lobe-chat/assets/28616219/bb9cd00f-b20c-4d7b-9c60-b921d350e319)
![image](https://github.com/lobehub/lobe-chat/assets/28616219/bdeb678e-6502-4667-86b1-504221ee7ded)
## 开发插件
如果你希望自行开发一个 LobeChat 的插件,欢迎查阅 [Lobe 插件开发指南](https://chat-plugin-sdk.lobehub.com/zh-CN/guides/intro) 以扩展你的 AI 智能助手的可能性边界!


@ -0,0 +1,27 @@
# Custom Plugins
## Installing Custom Plugins
If you wish to install a plugin that is not available in the LobeChat plugin store, such as a custom-developed LobeChat plugin, you can click on "Custom Plugins" to install it:
<video
muted
controls
loop
autoPlay
src="https://github.com/lobehub/lobe-chat/assets/28616219/034a328c-8465-4499-8f93-fdcdb03343cd"
></video>
In addition, LobeChat's plugin mechanism is compatible with ChatGPT plugins, so you can easily install corresponding ChatGPT plugins.
If you want to try installing custom plugins yourself, you can use the following links:
- `Custom Lobe Plugin` Mock Credit Card: https://lobe-plugin-mock-credit-card.vercel.app/manifest.json
- `ChatGPT Plugin` Access Links: https://www.accesslinks.ai/.well-known/ai-plugin.json
![image](https://github.com/lobehub/lobe-chat/assets/28616219/bb9cd00f-b20c-4d7b-9c60-b921d350e319)
![image](https://github.com/lobehub/lobe-chat/assets/28616219/bdeb678e-6502-4667-86b1-504221ee7ded)
## Developing Custom Plugins
If you wish to develop a LobeChat plugin on your own, feel free to refer to the [Plugin Development Guide](/en/usage/plugins/development) to expand the possibilities of your AI assistant!


@ -0,0 +1,26 @@
# 自定义插件
## 安装自定义插件
如果你希望安装一个不在 LobeChat 插件商店中的插件,例如自己开发的 LobeChat 插件,你可以点击「自定义插件」进行安装:
<video
muted
controls
loop
autoPlay
src="https://github.com/lobehub/lobe-chat/assets/28616219/034a328c-8465-4499-8f93-fdcdb03343cd"
></video>
此外LobeChat 的插件机制兼容了 ChatGPT 的插件,因此你可以一键安装相应的 ChatGPT 插件。
如果你希望尝试自行安装自定义插件,你可以使用以下链接来尝试:
- `自定义 Lobe 插件` Mock Credit Card: https://lobe-plugin-mock-credit-card.vercel.app/manifest.json
- `ChatGPT 插件` Access Links: https://www.accesslinks.ai/.well-known/ai-plugin.json
![image](https://github.com/lobehub/lobe-chat/assets/28616219/bb9cd00f-b20c-4d7b-9c60-b921d350e319) ![image](https://github.com/lobehub/lobe-chat/assets/28616219/bdeb678e-6502-4667-86b1-504221ee7ded)
## 开发自定义插件
如果你希望自行开发一个 LobeChat 的插件,欢迎查阅 [插件开发指南](/zh/usage/plugins/development) 以扩展你的 AI 智能助手的可能性边界!


@ -0,0 +1,262 @@
# Plugin Development Guide
## Plugin Composition
A LobeChat plugin consists of the following components:
1. **Plugin Index**: Used to display basic information about the plugin, including the plugin name, description, author, version, and a link to the plugin manifest. The official plugin index can be found at [lobe-chat-plugins](https://github.com/lobehub/lobe-chat-plugins). If you want to publish a plugin to the official plugin marketplace, you need to [submit a PR](https://github.com/lobehub/lobe-chat-plugins/pulls) to this repository.
2. **Plugin Manifest**: Used to describe the functionality of the plugin, including the server-side description, frontend display information, and version number. For a detailed introduction to the manifest, see [manifest][manifest-docs-url].
3. **Plugin Services**: Used to implement the server-side and frontend modules described in the plugin manifest, as follows:
- **Server-side**: Needs to implement the interface capabilities described in the `api` section of the manifest.
- **Frontend UI** (optional): Needs to implement the interface described in the `ui` section of the manifest. This interface will be displayed in plugin messages, allowing for a richer display of information than plain text.
## Custom Plugin Workflow
This section will introduce how to add and use a custom plugin in LobeChat.
### **`1`** Create and Launch Plugin Project
First, create a plugin project locally. You can use the template we have prepared: [lobe-chat-plugin-template][lobe-chat-plugin-template-url].
```bash
$ git clone https://github.com/lobehub/chat-plugin-template.git
$ cd chat-plugin-template
$ npm i
$ npm run dev
```
When you see `ready started server on 0.0.0.0:3400, url: http://localhost:3400`, it means the plugin service has been successfully launched locally.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259526-9ef25272-4312-429b-93bc-a95515727ed3.png)
### **`2`** Add Local Plugin in LobeChat Role Settings
Next, go to LobeChat, create a new assistant, and go to its session settings page:
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259643-1a9cc34a-76f3-4ccf-928b-129654670efd.png)
Click the <kbd>Add</kbd> button on the right of the plugin list to open the custom plugin adding popup:
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259748-2ef6a244-39bb-483c-b359-f156ffcbe1a4.png)
Fill in the **Plugin Description File Url** with `http://localhost:3400/manifest-dev.json`, which is the manifest address of the plugin we started locally.
At this point, you should see that the identifier of the plugin has been automatically recognized as `chat-plugin-template`. Next, you need to fill in the remaining form fields (only the title is required), and then click the <kbd>Save</kbd> button to complete the custom plugin addition.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259964-59f4906d-ae2e-4ec0-8b43-db36871d0869.png)
After adding, you can see the newly added plugin in the plugin list. If you need to modify the plugin configuration, you can click the <kbd>Settings</kbd> button on the far right to make changes.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265260093-a0363c74-0b5b-48dd-b103-2db6b4a8262e.png)
### **`3`** Test Plugin Function in Session
Next, we need to test whether the plugin's function is working properly.
Click the <kbd>Back</kbd> button to return to the session area, and then send a message to the assistant: "What should I wear?" At this point, the assistant will try to ask you about your gender and current mood.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265260291-f0aa0e7c-0ffb-486c-a834-08e73d49896f.png)
After answering, the assistant will initiate the plugin call, retrieve recommended clothing data from the server based on your gender and mood, and push it to you. Finally, it will provide a text summary based on this information.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265260461-c22ae797-2809-464b-96fc-d0c020f4807b.png)
After completing these operations, you have understood the basic process of adding custom plugins and using them in LobeChat.
<br />
## Local Plugin Development
In the above process, we have learned how to add and use plugins. Next, we will focus on the process of developing custom plugins.
### Manifest
The `manifest` aggregates information on how the plugin's functionality is implemented. The core fields are `api` and `ui`, which respectively describe the server-side interface capabilities and the front-end rendering interface address of the plugin.
Taking the `manifest` in the template we provided as an example:
```json
{
"api": [
{
"url": "http://localhost:3400/api/clothes",
"name": "recommendClothes",
"description": "Recommend clothes to the user based on their mood",
"parameters": {
"properties": {
"mood": {
"description": "The user's current mood, with optional values: happy, sad, anger, fear, surprise, disgust",
"enum": ["happy", "sad", "anger", "fear", "surprise", "disgust"],
"type": "string"
},
"gender": {
"type": "string",
"enum": ["man", "woman"],
"description": "The user's gender, which needs to be asked for from the user to obtain this information"
}
},
"required": ["mood", "gender"],
"type": "object"
}
}
],
"gateway": "http://localhost:3400/api/gateway",
"identifier": "chat-plugin-template",
"ui": {
"url": "http://localhost:3400",
"height": 200
},
"version": "1"
}
```
In this manifest, it mainly includes the following parts:
1. `identifier`: This is the unique identifier of the plugin, used to distinguish different plugins. This field needs to be globally unique.
2. `api`: This is an array containing all the API interface information of the plugin. Each interface includes the url, name, description, and parameters fields, all of which are required. The `description` and `parameters` fields will be sent to GPT as the `functions` parameter of the [Function Call](https://sspai.com/post/81986), and the parameters need to comply with the [JSON Schema](https://json-schema.org/) specification. In this example, the API interface is named `recommendClothes`, and its function is to recommend clothes based on the user's mood and gender. The interface parameters include the user's mood and gender, both of which are required.
3. `ui`: This field contains information about the plugin's user interface, indicating from which address LobeChat loads the plugin's front-end interface. Since LobeChat plugin interface loading is implemented based on iframes, the height and width of the plugin interface can be specified as needed.
4. `gateway`: This field specifies the gateway for LobeChat to query the plugin's API interface. LobeChat's default plugin gateway is a cloud-based service, and requests for custom plugins need to be sent to a locally launched service. Remote calls to a local address are generally not feasible. The `gateway` field solves this problem. By specifying the gateway in the manifest, LobeChat will send plugin requests to this address, and the local gateway address will dispatch requests to the local plugin service. Published online plugins do not need to specify this field.
5. `version`: This is the version number of the plugin, which currently has no effect.
In actual development, you can modify the plugin's description list according to your own needs to declare the functionality you want to implement. For a complete introduction to each field in the manifest, see: [manifest][manifest-docs-url].
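To make the mapping between the manifest and the model concrete, here is a small sketch — an assumption about how a client could wire this up, not LobeChat's actual code — of turning an `api` entry into the `functions` parameter sent to GPT, with the server-only `url` field stripped out:

```typescript
// Sketch only: the type and variable names here are illustrative assumptions.
interface PluginApi {
  url: string; // server-side endpoint, never sent to the model
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema describing the arguments
}

const manifestApi: PluginApi[] = [
  {
    url: 'http://localhost:3400/api/clothes',
    name: 'recommendClothes',
    description: 'Recommend clothes to the user based on their mood',
    parameters: {
      type: 'object',
      properties: {
        mood: { type: 'string', enum: ['happy', 'sad', 'anger', 'fear', 'surprise', 'disgust'] },
        gender: { type: 'string', enum: ['man', 'woman'] },
      },
      required: ['mood', 'gender'],
    },
  },
];

// Only name / description / parameters reach the model as `functions`.
const functions = manifestApi.map(({ name, description, parameters }) => ({
  name,
  description,
  parameters,
}));
```

When the model decides to invoke `recommendClothes`, it returns arguments conforming to the JSON Schema above, which the client then forwards to the plugin's `url`.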
### Project Structure
The [lobe-chat-plugin-template][lobe-chat-plugin-template-url] template project uses Next.js as the development framework, and its core directory structure is as follows:
```
➜ chat-plugin-template
├── public
│ └── manifest-dev.json # Manifest file
├── src
│ └── pages
│ │ ├── api # Next.js server-side folder
│ │ │ ├── clothes.ts # Implementation of the recommendClothes interface
│ │ │ └── gateway.ts # Local plugin proxy gateway
│ │ └── index.tsx # Front-end display interface
```
This template uses Next.js as the development framework. You can use any development framework and language you are familiar with, as long as it can implement the functionality described in the manifest.
Contributions of more plugin templates using different frameworks and languages are also welcome.
### Server-Side
The server-side needs to implement the API interfaces described in the manifest. In the template, we use Vercel's [Edge Runtime](https://nextjs.org/docs/pages/api-reference/edge) so the service stays maintenance-free.
#### API Implementation
For the Edge Runtime, we provide the `createErrorResponse` method in `@lobehub/chat-plugin-sdk` to quickly return error responses. Currently, the provided error types are detailed in: [PluginErrorType][plugin-error-type-url].
The implementation of the clothes interface in the template is as follows:
```ts
export default async (req: Request) => {
if (req.method !== 'POST') return createErrorResponse(PluginErrorType.MethodNotAllowed);
const { gender, mood } = (await req.json()) as RequestData;
const clothes = gender === 'man' ? manClothes : womanClothes;
const result: ResponseData = {
clothes: clothes[mood] || [],
mood,
today: Date.now(),
};
return new Response(JSON.stringify(result));
};
```
Where `manClothes` and `womanClothes` are mock data and can be replaced with database queries in actual scenarios.
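For reference, the following is a minimal sketch of what such mock data might look like — the names and values are illustrative assumptions, not the template's actual data:

```typescript
// Hypothetical mock data; the template's real data set may differ.
type Mood = 'happy' | 'sad' | 'anger' | 'fear' | 'surprise' | 'disgust';
type Wardrobe = Partial<Record<Mood, string[]>>;

const manClothes: Wardrobe = {
  happy: ['bright T-shirt', 'light jacket'],
  sad: ['warm hoodie', 'comfortable sweatpants'],
};

const womanClothes: Wardrobe = {
  happy: ['floral dress', 'canvas sneakers'],
  sad: ['knit cardigan', 'soft scarf'],
};

// Same lookup shape as the interface above: unknown moods fall back to [].
const pickClothes = (gender: 'man' | 'woman', mood: Mood): string[] =>
  (gender === 'man' ? manClothes : womanClothes)[mood] ?? [];
```

In a real plugin, this in-memory lookup could be swapped for a database query without changing the interface contract.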
#### Plugin Gateway
Since the default plugin gateway for LobeChat is a cloud service `/api/plugins`, the cloud service sends requests to the address specified in the manifest's `api.url` to solve cross-origin issues.
For custom plugins, plugin requests need to be sent to the local service. Therefore, by specifying the gateway in the manifest (http://localhost:3400/api/gateway), LobeChat will directly request this address, and you only need to create the corresponding gateway at that address.
```ts
import { createLobeChatPluginGateway } from '@lobehub/chat-plugins-gateway';
export const config = {
runtime: 'edge',
};
export default createLobeChatPluginGateway();
```
[`@lobehub/chat-plugins-gateway`](https://github.com/lobehub/chat-plugins-gateway) contains the implementation of the plugin gateway in LobeChat [here](https://github.com/lobehub/lobe-chat/blob/main/src/pages/api/plugins.api.ts). You can use this package directly to create a gateway, allowing LobeChat to access the local plugin service.
### Plugin UI Interface
The custom UI interface for plugins is optional. For example, the official plugin [Web Content Extraction](https://github.com/lobehub/chat-plugin-web-crawler) does not have a corresponding user interface.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265263241-0e765fdc-3463-4c36-a398-aef177a30df9.png)
If you want to display richer information in plugin messages or include some interactive operations, you can customize a user interface for the plugin. For example, the following image shows the user interface for the [Search Engine](https://github.com/lobehub/chat-plugin-search-engine) plugin.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265263427-9bdc03d5-aa61-4f62-a2ce-88683f3308d8.png)
#### Implementation of Plugin UI Interface
LobeChat implements the loading of plugin UI through `iframe` and uses `postMessage` to communicate with the plugin. Therefore, the implementation of the plugin UI is consistent with regular web development. You can use any frontend framework and development language you are familiar with.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265263653-4ea87abc-249a-49f3-a241-7ed93ddb1ddf.png)
In the template we provide, we use React + Next.js + [antd](https://ant.design/) as the frontend interface framework. You can find the implementation of the user interface in [`src/pages/index.tsx`](https://github.com/lobehub/chat-plugin-template/blob/main/src/pages/index.tsx).
As for plugin communication, we provide relevant methods in [`@lobehub/chat-plugin-sdk`](https://github.com/lobehub/chat-plugin-sdk) to simplify communication between the plugin and LobeChat. You can actively retrieve the current message data from LobeChat using the `fetchPluginMessage` method. For detailed information about this method, see: [fetchPluginMessage][fetch-plugin-message-url].
```tsx
import { fetchPluginMessage } from '@lobehub/chat-plugin-sdk';
import { memo, useEffect, useState } from 'react';
import { ResponseData } from '@/type';
const Render = memo(() => {
const [data, setData] = useState<ResponseData>();
useEffect(() => {
// Retrieve the current plugin message from LobeChat
fetchPluginMessage().then((e: ResponseData) => {
setData(e);
});
}, []);
return <>...</>;
});
export default Render;
```
## Plugin Deployment and Release
Once you have finished developing the plugin, you can deploy it using your preferred method, such as using Vercel or packaging it as a Docker container for release, and so on.
If you want more people to use your plugin, feel free to [submit it for listing](https://github.com/lobehub/lobe-chat-plugins) on the plugin marketplace.
[![][submit-plugin-shield]][submit-plugin-url]
### Plugin Shield
[![lobe-chat-plugin](https://img.shields.io/badge/%F0%9F%A4%AF%20%26%20%F0%9F%A7%A9%20LobeHub-Plugin-95f3d9?labelColor=black&style=flat-square)](https://github.com/lobehub/lobe-chat-plugins)
```md
[![lobe-chat-plugin](https://img.shields.io/badge/%F0%9F%A4%AF%20%26%20%F0%9F%A7%A9%20LobeHub-Plugin-95f3d9?labelColor=black&style=flat-square)](https://github.com/lobehub/lobe-chat-plugins)
```
## Links
- **📘 Plugin SDK Documentation**: https://chat-plugin-sdk.lobehub.com
- **🚀 chat-plugin-template**: https://github.com/lobehub/chat-plugin-template
- **🧩 chat-plugin-sdk**: https://github.com/lobehub/chat-plugin-sdk
- **🚪 chat-plugin-gateway**: https://github.com/lobehub/chat-plugins-gateway
- **🏪 lobe-chat-plugins**: https://github.com/lobehub/lobe-chat-plugins
[fetch-plugin-message-url]: https://github.com/lobehub/chat-plugin-template
[lobe-chat-plugin-template-url]: https://github.com/lobehub/chat-plugin-template
[manifest-docs-url]: https://chat-plugin-sdk.lobehub.com/guides/plugin-manifest
[plugin-error-type-url]: https://github.com/lobehub/chat-plugin-template
[submit-plugin-shield]: https://img.shields.io/badge/🧩/🏪_submit_plugin-%E2%86%92-95f3d9?labelColor=black&style=for-the-badge
[submit-plugin-url]: https://github.com/lobehub/lobe-chat-plugins


@ -1,21 +1,5 @@
# Plugin Development Guide
#### TOC
- [Plugin Composition](#plugin-composition)
- [Custom Plugin Workflow](#custom-plugin-workflow)
- [**`1`** Create and Launch the Plugin Project](#1-create-and-launch-the-plugin-project)
- [**`2`** Add the Local Plugin in LobeChat Role Settings](#2-add-the-local-plugin-in-lobechat-role-settings)
- [**`3`** Test the Plugin in a Session](#3-test-the-plugin-in-a-session)
- [Local Plugin Development](#local-plugin-development)
- [manifest](#manifest)
- [Project Structure](#project-structure)
- [Server-Side](#server-side)
- [Plugin UI Interface](#plugin-ui-interface)
- [Plugin Deployment and Release](#plugin-deployment-and-release)
- [Plugin Shield](#plugin-shield)
- [Links](#links)
## Plugin Composition
A LobeChat plugin consists of the following parts:
@ -26,8 +10,6 @@
- **Server-side**: needs to implement the interface capabilities described in the `api` section of the manifest;
- **Frontend UI** (optional): needs to implement the interface described in the `ui` section of the manifest; this interface is rendered inside plugin messages, enabling richer information display than plain text.
<br/>
## Custom Plugin Workflow
This section introduces how to add and use a custom plugin in LobeChat.
@ -53,7 +35,7 @@ $ npm run dev
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259643-1a9cc34a-76f3-4ccf-928b-129654670efd.png)
Click the <kbd>Add<kbd/> button on the right of the plugin list to open the custom plugin popup:
Click the <kbd>Add</kbd> button on the right of the plugin list to open the custom plugin popup:
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/265259748-2ef6a244-39bb-483c-b359-f156ffcbe1a4.png)
@ -81,7 +63,7 @@ $ npm run dev
After completing these operations, you have learned the basic workflow of adding custom plugins and using them in LobeChat.
<br/>
<br />
## Local Plugin Development
@ -192,7 +174,7 @@ export default async (req: Request) => {
Since LobeChat's default plugin gateway is the cloud service `/api/plugins`, the cloud service sends requests to the address in the manifest's `api.url` to work around cross-origin restrictions.
For custom plugins, plugin requests need to be sent to the local service, so by specifying the gateway (<http://localhost:3400/api/gateway>) in the manifest, LobeChat will request that address directly, and you only need to create the corresponding gateway at that address.
For custom plugins, plugin requests need to be sent to the local service, so by specifying the gateway (http://localhost:3400/api/gateway) in the manifest, LobeChat will request that address directly, and you only need to create the corresponding gateway at that address.
```ts
import { createLobeChatPluginGateway } from '@lobehub/chat-plugins-gateway';
@ -201,7 +183,7 @@ export const config = {
runtime: 'edge',
};
export default async createLobeChatPluginGateway();
export default createLobeChatPluginGateway();
```
[`@lobehub/chat-plugins-gateway`](https://github.com/lobehub/chat-plugins-gateway) contains the [implementation](https://github.com/lobehub/lobe-chat/blob/main/src/pages/api/plugins.api.ts) of the plugin gateway in LobeChat. You can use this package directly to create a gateway, allowing LobeChat to access the local plugin service.
@ -248,8 +230,6 @@ const Render = memo(() => {
export default Render;
```
<br/>
## Plugin Deployment and Release
Once you have finished developing the plugin, you can deploy it in whatever way you prefer, for example with Vercel, or by packaging it as a Docker image, and then release it.
@ -266,17 +246,13 @@ export default Render;
[![lobe-chat-plugin](https://img.shields.io/badge/%F0%9F%A4%AF%20%26%20%F0%9F%A7%A9%20LobeHub-Plugin-95f3d9?labelColor=black&style=flat-square)](https://github.com/lobehub/lobe-chat-plugins)
```
<br/>
## Links
- **📘 Plugin SDK Docs**: <https://chat-plugin-sdk.lobehub.com>
- **🚀 chat-plugin-template**: <https://github.com/lobehub/chat-plugin-template>
- **🧩 chat-plugin-sdk**: <https://github.com/lobehub/chat-plugin-sdk>
- **🚪 chat-plugin-gateway**: <https://github.com/lobehub/chat-plugins-gateway>
- **🏪 lobe-chat-plugins**: <https://github.com/lobehub/lobe-chat-plugins>
<!-- LINK GROUP -->
- **📘 Plugin SDK Docs**: https://chat-plugin-sdk.lobehub.com
- **🚀 chat-plugin-template**: https://github.com/lobehub/chat-plugin-template
- **🧩 chat-plugin-sdk**: https://github.com/lobehub/chat-plugin-sdk
- **🚪 chat-plugin-gateway**: https://github.com/lobehub/chat-plugins-gateway
- **🏪 lobe-chat-plugins**: https://github.com/lobehub/lobe-chat-plugins
[fetch-plugin-message-url]: https://github.com/lobehub/chat-plugin-template
[lobe-chat-plugin-template-url]: https://github.com/lobehub/chat-plugin-template


@ -0,0 +1,10 @@
# Plugin Store
You can access the plugin store by going to "Extension Tools" -> "Plugin Store" in the session toolbar.
![](https://github.com/lobehub/lobe-chat/assets/28616219/ab4e60d0-1293-49ac-8798-cb29b3b789e6)
In the plugin store, you can directly install and use plugins in LobeChat.
![](https://github.com/lobehub/lobe-chat/assets/28616219/d7a5d821-116f-4be6-8a1a-38d81a5ea0ea)


@ -0,0 +1,9 @@
# Plugin Store
You can access the plugin store via "Extension Tools" -> "Plugin Store" in the session toolbar.
![820shots_so](https://github.com/lobehub/lobe-chat/assets/28616219/ab4e60d0-1293-49ac-8798-cb29b3b789e6)
The plugin store offers plugins that can be installed and used directly in LobeChat.
![image](https://github.com/lobehub/lobe-chat/assets/28616219/d7a5d821-116f-4be6-8a1a-38d81a5ea0ea)

View file

@ -0,0 +1,20 @@
import { Callout } from 'nextra/components';
# Data Analysis
To better understand how LobeChat is used, we have integrated several free / open-source data analytics services into LobeChat to collect usage data, which you can enable as needed.
<Callout type={'warning'}>
Currently, the integrated data analytics platforms only support deployment and usage on the
Vercel / Zeabur platforms and do not support Docker / Docker Compose deployment.
</Callout>
## Vercel Analytics
[Vercel Analytics](https://vercel.com/analytics) is a data analytics service launched by Vercel, which can help you collect website visit data, including traffic, sources, and devices used for access.
We have integrated Vercel Analytics into the code, and you can enable it by setting the environment variable `NEXT_PUBLIC_ANALYTICS_VERCEL=1`, and then open the Analytics tab in your Vercel deployment project to view your app's visit data.
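For example, this could be set as a `.env`-style line (or in your Vercel project's environment settings):

```shell
# Enable the built-in Vercel Analytics integration
NEXT_PUBLIC_ANALYTICS_VERCEL=1
```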
Vercel Analytics provides 2500 free Web Analytics Events per month (which can be understood as page views), which is generally sufficient for personal deployment and self-use products.
If you need to learn more about using Vercel Analytics, please refer to the [Vercel Web Analytics Quick Start](https://vercel.com/docs/analytics/quickstart).


@ -1,11 +1,10 @@
# Data Statistics
import {Callout} from "nextra/components";
# Data Analysis
To better understand how LobeChat is used, we have integrated several free / open-source data analytics services into LobeChat to collect usage data, which you can enable as needed.
#### TOC
- [Vercel Analytics](#vercel-analytics)
- [🚧 Posthog](#-posthog)
<Callout type={'warning'}>The integrated analytics platforms currently only support deployment on the Vercel / Zeabur platforms and do not support Docker / Docker Compose deployment.</Callout>
## Vercel Analytics
@ -15,6 +14,4 @@
Vercel Analytics provides 2,500 free Web Analytics Events per month (roughly page views), which is generally sufficient for personally deployed, self-use products.
If you need a detailed tutorial on Vercel Analytics, see the [Vercel Web Analytics Quickstart](https://vercel.com/docs/analytics/quickstart)
## 🚧 Posthog
If you need a detailed tutorial on Vercel Analytics, see the [Vercel Web Analytics Quickstart](https://vercel.com/docs/analytics/quickstart)


@ -0,0 +1,94 @@
import { Callout, Steps } from 'nextra/components';
# Identity Verification Service
LobeChat supports configuring external identity verification services for internal use by enterprises/organizations to centrally manage user authorization. Currently, it supports [Auth0][auth0-client-page]. This article will introduce how to configure the identity verification service.
## Configure Identity Verification Service
<Steps>
### Create Auth0 Application
Register and log in to [Auth0][auth0-client-page], click on the "Applications" in the left navigation bar to switch to the application management interface, and click "Create Application" in the upper right corner to create an application.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/1b405347-f4c3-4c55-82f6-47116f2210d0)
Fill in the application name you want to display to the organization users, choose any application type, and click "Create".
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/75c92f85-3ad3-4473-a9c6-e667e28d428d)
After successful creation, click on the corresponding application to enter the application details page, switch to the "Settings" tab, and you can see the corresponding configuration information.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/a1ed996b-95ef-4b7d-a50d-b4666eccfecb)
In the application configuration page, you also need to configure Allowed Callback URLs, where you should fill in:
```bash
http(s)://your-domain/api/auth/callback/auth0
```
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/575f46aa-f485-49bd-8b90-dbb1ce1a5c1b)
<Callout type={'info'}>
You can fill in or modify Allowed Callback URLs after deployment, but make sure the filled URL is
consistent with the deployed URL.
</Callout>
### Add Users
Click on the "Users Management" in the left navigation bar to enter the user management interface, where you can create users for your organization to log in to LobeChat.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/3b8127ab-dc4f-4ff9-a4cb-dec3ef0295cc)
### Configure Environment Variables
When deploying LobeChat, you need to configure the following environment variables:
| Environment Variable | Type | Description |
| --- | --- | --- |
| `ENABLE_OAUTH_SSO` | Required | Enable single sign-on (SSO) for LobeChat. Set to `1` to enable single sign-on. |
| `NEXTAUTH_SECRET` | Required | Key used to encrypt Auth.js session tokens. You can generate a key using the following command: `openssl rand -base64 32` |
| `AUTH0_CLIENT_ID` | Required | Client ID of the Auth0 application |
| `AUTH0_CLIENT_SECRET` | Required | Client Secret of the Auth0 application |
| `AUTH0_ISSUER` | Required | Domain of the Auth0 application, `https://example.auth0.com` |
| `ACCESS_CODE` | Required | Add a password to access this service. You can set a sufficiently long random password to "disable" access code authorization. |
You can refer to the related variable details at [Environment Variables](/en/self-hosting/environment-variable#auth0).
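Put together, a deployment's configuration might look like the following `.env` sketch (all values below are placeholders, not real credentials):

```shell
# Placeholder values — replace with your own Auth0 application's settings
ENABLE_OAUTH_SSO=1
NEXTAUTH_SECRET=generated-with-openssl-rand-base64-32
AUTH0_CLIENT_ID=your-auth0-client-id
AUTH0_CLIENT_SECRET=your-auth0-client-secret
AUTH0_ISSUER=https://example.auth0.com
ACCESS_CODE=a-sufficiently-long-random-password
```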
</Steps>
<Callout>
After successful deployment, users will be able to authenticate and use LobeChat using the users
configured in Auth0.
</Callout>
## Advanced Configuration
### Connecting to an Existing Single Sign-On Service
If your enterprise or organization already has a unified identity authentication infrastructure, you can connect to an existing single sign-on service in Applications -> SSO Integrations.
Auth0 supports single sign-on services such as Azure Active Directory, Slack, Google Workspace, Office 365, Zoom, and more. For a detailed list of supported services, please refer to [this link][auth0-sso-integrations].
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/32650f4f-d0b0-4843-b26d-d35bad11d8a3)
### Configuring Social Login
If your enterprise or organization needs to support external user logins, you can configure social login services in Authentication -> Social.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/7b6f6a6c-2686-49d8-9dbd-0516053f1efa)
<Callout type={'warning'}>
Configuring social login services by default allows anyone to authenticate, which may lead to
LobeChat being abused by external users.
</Callout>
<Callout>
If you need to restrict login users, be sure to configure a **blocking policy**: After enabling
the social login option, refer to [this article][auth0-login-actions-manual] to create an Action
to set up a blocking/allow list.
</Callout>
[auth0-client-page]: https://manage.auth0.com/dashboard
[auth0-login-actions-manual]: https://auth0.com/blog/permit-or-deny-login-requests-using-auth0-actions/
[auth0-sso-integrations]: https://marketplace.auth0.com/features/sso-integrations


@ -0,0 +1,90 @@
import { Callout, Steps } from 'nextra/components';
# Identity Verification Service
LobeChat supports configuring external identity verification services for internal use by enterprises / organizations, to centrally manage user authorization. [Auth0][auth0-client-page] is currently supported. This article describes how to configure the identity verification service.
## Configuring the Identity Verification Service
<Steps>
### Create an Auth0 Application
Register and log in to [Auth0][auth0-client-page], click "Applications" in the left navigation bar to switch to the application management interface, and click "Create Application" in the upper right corner to create an application.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/1b405347-f4c3-4c55-82f6-47116f2210d0)
Fill in the application name you want to show to your organization's users, choose any application type, and click "Create".
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/75c92f85-3ad3-4473-a9c6-e667e28d428d)
After the application is created, click it to enter its details page, switch to the "Settings" tab, and you will see the corresponding configuration information:
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/a1ed996b-95ef-4b7d-a50d-b4666eccfecb)
On the application configuration page, you also need to configure the Allowed Callback URLs, filling in:
```bash
http(s)://your-domain/api/auth/callback/auth0
```
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/575f46aa-f485-49bd-8b90-dbb1ce1a5c1b)
<Callout type={'info'}>
You can fill in or modify the Allowed Callback URLs after deployment, but make sure the URL you enter matches the deployed URL.
</Callout>
### Add Users
Click "Users Management" in the left navigation bar to enter the user management interface, where you can create users for your organization to log in to LobeChat:
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/3b8127ab-dc4f-4ff9-a4cb-dec3ef0295cc)
### Configure Environment Variables
When deploying LobeChat, you need to configure the following environment variables:
| Environment Variable | Type | Description |
| --- | --- | --- |
| `ENABLE_OAUTH_SSO` | Required | Enable single sign-on (SSO) for LobeChat. Set to `1` to enable it. |
| `NEXTAUTH_SECRET` | Required | Key used to encrypt Auth.js session tokens. You can generate one with: `openssl rand -base64 32` |
| `AUTH0_CLIENT_ID` | Required | Client ID of the Auth0 application |
| `AUTH0_CLIENT_SECRET` | Required | Client Secret of the Auth0 application |
| `AUTH0_ISSUER` | Required | Domain of the Auth0 application, e.g. `https://example.auth0.com` |
| `ACCESS_CODE` | Required | Add a password to access this service; you can set a sufficiently long random password to "disable" access-code authorization |
See [Environment Variables](/zh/self-hosting/environment-variable#auth0) for details on these variables.
</Steps>
<Callout>After successful deployment, users configured in Auth0 will be able to authenticate and use LobeChat.</Callout>
## Advanced Configuration
### Connecting an Existing Single Sign-On Service
If your enterprise or organization already has a unified identity authentication infrastructure, you can connect your existing single sign-on service under Applications -> SSO Integrations.
Auth0 supports single sign-on services such as Azure Active Directory / Slack / Google Workspace / Office 365 / Zoom; for the detailed list of supported services, see [here][auth0-sso-integrations].
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/32650f4f-d0b0-4843-b26d-d35bad11d8a3)
### Configuring Social Login
If your enterprise or organization needs to support logins by external users, you can configure social login services under Authentication -> Social.
![](https://github.com/CloudPassenger/lobe-chat/assets/30863298/7b6f6a6c-2686-49d8-9dbd-0516053f1efa)
<Callout type={'warning'}>
Configuring social login services allows anyone to authenticate by default, which may lead to LobeChat being abused by external users.
</Callout>
<Callout>
If you need to restrict login users, be sure to configure a **blocking policy**: after enabling the social login option, refer to [this article][auth0-login-actions-manual] to create an Action that sets up a block / allow list.
</Callout>
[auth0-client-page]: https://manage.auth0.com/dashboard
[auth0-login-actions-manual]: https://auth0.com/blog/permit-or-deny-login-requests-using-auth0-actions/
[auth0-sso-integrations]: https://marketplace.auth0.com/features/sso-integrations


@ -0,0 +1,48 @@
import { Callout } from 'nextra/components';
# Integrating with Azure OpenAI
LobeChat supports using [Azure OpenAI][azure-openai-url] as the model service provider for OpenAI. This article will explain how to configure Azure OpenAI.
## Usage Limitations
Due to development costs ([#178][rfc]), the current version of LobeChat does not fully comply with the implementation model of Azure OpenAI. Instead, it adopts a solution based on `openai` to be compatible with Azure OpenAI. As a result, the following limitations exist:
- Only one of OpenAI and Azure OpenAI can be selected. Once you enable Azure OpenAI, you will not be able to use OpenAI as the model service provider.
- LobeChat requires the deployment name to be the same as the model name in order to function properly. For example, the deployment name for the `gpt-35-turbo` model must be `gpt-35-turbo`. Otherwise, LobeChat will not be able to match the corresponding model correctly. ![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/267082091-d89d53d3-1c8c-40ca-ba15-0a9af2a79264.png)
- Due to the complexity of integrating with Azure OpenAI's SDK, it is currently not possible to query the list of configured models.
## Configuring in the Interface
Click "Actions" - "Settings" in the bottom left corner, then switch to the "Language Model" tab and enable the "Azure OpenAI" switch to start using Azure OpenAI.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/267083420-422a3714-627e-4bef-9fbc-141a2a8ca916.png)
You can fill in the corresponding configuration items as needed:
- **API Key**: The API key you applied for on the Azure OpenAI account page, which can be found in the "Keys and Endpoints" section.
- **API Address**: Azure API address, which can be found in the "Keys and Endpoints" section when checking resources in the Azure portal.
- **Azure API Version**: The API version of Azure, following the format YYYY-MM-DD. Refer to the [latest version][azure-api-verion-url].
After completing the configuration of the above fields, click "Check". If it prompts "Check passed", it means the configuration was successful.
## Configuration during Deployment
If you want the deployed version to be pre-configured with Azure OpenAI for end users to use directly, you need to configure the following environment variables during deployment:
| Environment Variable | Type | Description | Default Value | Example |
| --- | --- | --- | --- | --- |
| `USE_AZURE_OPENAI` | Required | Set this value to `1` to enable Azure OpenAI configuration | - | `1` |
| `AZURE_API_KEY` | Required | This is the API key you obtained from the Azure OpenAI account page | - | `c55168be3874490ef0565d9779ecd5a6` |
| `OPENAI_PROXY_URL` | Required | Azure API address, can be found in the "Keys and Endpoints" section when checking resources in the Azure portal | - | `https://docs-test-001.openai.azure.com` |
| `AZURE_API_VERSION` | Optional | Azure API version, following the format YYYY-MM-DD | 2023-08-01-preview | `2023-05-15`, see [latest version][azure-api-verion-url] |
| `ACCESS_CODE` | Optional | Add a password to access this service. You can set a long password to prevent brute force attacks. When this value is separated by commas, it becomes an array of passwords | - | `awCT74` or `e3@09!` or `code1,code2,code3` |
<Callout>
When you enable `USE_AZURE_OPENAI` on the server, users will be unable to modify and use the
OpenAI API key in the frontend configuration.
</Callout>
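For instance, a minimal deployment configuration could look like this `.env` sketch, reusing the placeholder values from the table above:

```shell
# Placeholder values — replace with your own Azure OpenAI resource's settings
USE_AZURE_OPENAI=1
AZURE_API_KEY=c55168be3874490ef0565d9779ecd5a6
OPENAI_PROXY_URL=https://docs-test-001.openai.azure.com
AZURE_API_VERSION=2023-08-01-preview
ACCESS_CODE=awCT74
```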
[azure-api-verion-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions
[azure-openai-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/concepts/models
[rfc]: https://github.com/lobehub/lobe-chat/discussions/178


@ -1,20 +1,15 @@
# 使用 Azure OpenAI 部署
import { Callout } from 'nextra/components';
# 与 Azure OpenAI 集成使用
LobeChat 支持使用 [Azure OpenAI][azure-openai-url] 作为 OpenAI 的模型服务商,本文将介绍如何配置 Azure OpenAI。
#### TOC
- [使用限制](#使用限制)
- [在界面中配置](#在界面中配置)
- [在部署时配置](#在部署时配置)
## 使用限制
从研发成本考虑 ([#178][rfc]),目前阶段的 LobeChat 并没有 100% 完全符合 Azure OpenAI 的实现模型,采用了以 `openai` 为基座,兼容 Azure OpeAI 的解决方案。因此会带来以下局限性:
- OpenAI 与 Azure OpenAI 只能二选一,当你开启使用 Azure OpenAI 后,将无法使用 OpenAI 作为模型服务商;
- LobeChat 约定了与模型同名的部署名才能正常使用,比如 `gpt-35-turbo` 模型的部署名,必须为 `gpt-35-turbo`,否则 LobeChat 将无法正常正确匹配到相应模型
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/267082091-d89d53d3-1c8c-40ca-ba15-0a9af2a79264.png)
- LobeChat 约定了与模型同名的部署名才能正常使用,比如 `gpt-35-turbo` 模型的部署名,必须为 `gpt-35-turbo`,否则 LobeChat 将无法正确匹配到相应模型 ![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/267082091-d89d53d3-1c8c-40ca-ba15-0a9af2a79264.png)
- 由于 Azure OpenAI 的 SDK 接入复杂度,当前无法查询配置资源的模型列表;
## 在界面中配置
@@ -31,23 +26,23 @@ LobeChat 支持使用 [Azure OpenAI][azure-openai-url] 作为 OpenAI 的模型
完成上述字段配置后,点击「检查」,如果提示「检查通过」,则说明配置成功。
<br/>
<br />
## 在部署时配置
如果你希望部署的版本直接配置好 Azure OpenAI让终端用户直接使用那么你需要在部署时配置以下环境变量
| 环境变量 | 类型 | 描述 | 默认值 | 示例 |
| ------------------- | ---- | -------------------------------------------------------------------------------- | ------------------ | -------------------------------------------------- |
| `USE_AZURE_OPENAI` | 必选 | 设置该值为 `1` 开启 Azure OpenAI 配置 | - | `1` |
| `AZURE_API_KEY` | 必选 | 这是你在 Azure OpenAI 账户页面申请的 API 密钥 | - | `c55168be3874490ef0565d9779ecd5a6` |
| `OPENAI_PROXY_URL` | 必选 | Azure API 地址,从 Azure 门户检查资源时,可在 “密钥和终结点” 部分中找到此值 | - | `https://docs-test-001.openai.azure.com` |
| `AZURE_API_VERSION` | 可选 | Azure 的 API 版本,遵循 YYYY-MM-DD 格式 | 2023-08-01-preview | `2023-05-15`,查阅[最新版本][azure-api-verion-url] |
| `ACCESS_CODE` | 可选 | 添加访问此服务的密码,你可以设置一个长密码以防被爆破,该值用逗号分隔时为密码数组 | - | `awCT74` 或 `e3@09!` 或 `code1,code2,code3` |
| 环境变量 | 类型 | 描述 | 默认值 | 示例 |
| --- | --- | --- | --- | --- |
| `USE_AZURE_OPENAI` | 必选 | 设置该值为 `1` 开启 Azure OpenAI 配置 | - | `1` |
| `AZURE_API_KEY` | 必选 | 这是你在 Azure OpenAI 账户页面申请的 API 密钥 | - | `c55168be3874490ef0565d9779ecd5a6` |
| `OPENAI_PROXY_URL` | 必选 | Azure API 地址,从 Azure 门户检查资源时,可在 “密钥和终结点” 部分中找到此值 | - | `https://docs-test-001.openai.azure.com` |
| `AZURE_API_VERSION` | 可选 | Azure 的 API 版本,遵循 YYYY-MM-DD 格式 | 2023-08-01-preview | `2023-05-15`,查阅[最新版本][azure-api-verion-url] |
| `ACCESS_CODE` | 可选 | 添加访问此服务的密码,你可以设置一个长密码以防被爆破,该值用逗号分隔时为密码数组 | - | `awCT74` 或 `e3@09!` 或 `code1,code2,code3` |
> \[!NOTE]
>
> 当你在服务端开启 `USE_AZURE_OPENAI` 后,用户将无法在前端配置中修改并使用 OpenAI key。
<Callout>
当你在服务端开启 `USE_AZURE_OPENAI` 后,用户将无法在前端配置中修改并使用 OpenAI API key。
</Callout>
[azure-api-verion-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions
[azure-openai-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/concepts/models

View file

@@ -0,0 +1,121 @@
import { Callout, Steps, Tabs } from 'nextra/components';
# Docker Compose Deployment Guide
[![][docker-release-shield]][docker-release-link][![][docker-size-shield]][docker-size-link][![][docker-pulls-shield]][docker-pulls-link]
We provide a [Docker image][docker-release-link] for deploying the LobeChat service on your private device.
<Steps>
### Install Docker Container Environment
(Skip this step if already installed)
<Tabs items={['Ubuntu', 'CentOS']}>
<Tabs.Tab>
```fish
$ apt install docker.io
```
</Tabs.Tab>
<Tabs.Tab>
```fish
$ yum install docker
```
</Tabs.Tab>
</Tabs>
### Run Docker Compose Deployment Command
When using `docker-compose`, the configuration file is as follows:
```yml
version: '3.8'
services:
lobe-chat:
image: lobehub/lobe-chat
container_name: lobe-chat
restart: always
ports:
- '3210:3210'
environment:
OPENAI_API_KEY: sk-xxxx
OPENAI_PROXY_URL: https://api-proxy.com/v1
ACCESS_CODE: lobe66
```
Run the following command to start the Lobe Chat service:
```bash
$ docker-compose up -d
```
### Crontab Automatic Update Script (Optional)
Similarly, you can use the following script to automatically update Lobe Chat. When using `Docker Compose`, no additional configuration of environment variables is required.
```bash
#!/bin/bash
# auto-update-lobe-chat.sh
# Set proxy (optional)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
# Pull the latest image and store the output in a variable
output=$(docker pull lobehub/lobe-chat:latest 2>&1)
# Check if the pull command was executed successfully
if [ $? -ne 0 ]; then
exit 1
fi
# Check if the output contains a specific string
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
# If the image is already up to date, do nothing
if [ $? -eq 0 ]; then
exit 0
fi
echo "Detected Lobe-Chat update"
# Remove the old container
echo "Removed: $(docker rm -f lobe-chat)"
# You may need to navigate to the directory where `docker-compose.yml` is located first
# cd /path/to/docker-compose-folder
# Run the new container
echo "Started: $(docker-compose up -d)"
# Print the update time and version
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
# Clean up unused images
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```
This script can also be used in Crontab, but ensure that your Crontab can find the correct Docker command. It is recommended to use absolute paths.
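To find the absolute path to use in the script, you can query your shell; the `/usr/bin/docker` fallback below is only an assumed example:

```bash
# Resolve the docker binary's absolute path; fall back to an assumed
# default location if docker is not installed on this machine.
docker_bin=$(command -v docker || echo "/usr/bin/docker")
echo "Use this path in the script: $docker_bin"
```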
Configure Crontab to execute the script every 5 minutes:
```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```
</Steps>
[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square

View file

@@ -0,0 +1,121 @@
import { Callout, Steps, Tabs } from 'nextra/components';
# Docker Compose 部署指引
[![][docker-release-shield]][docker-release-link][![][docker-size-shield]][docker-size-link][![][docker-pulls-shield]][docker-pulls-link]
我们提供了 [Docker 镜像][docker-release-link],供你在自己的私有设备上部署 LobeChat 服务。
[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square
<Steps>
### 安装 Docker 容器环境
(如果已安装,请跳过此步)
<Tabs items={['Ubuntu', 'CentOS']}>
<Tabs.Tab>
```fish
$ apt install docker.io
```
</Tabs.Tab>
<Tabs.Tab>
```fish
$ yum install docker
```
</Tabs.Tab>
</Tabs>
### 运行 Docker Compose 部署指令
使用 `docker-compose` 时配置文件如下:
```yml
version: '3.8'
services:
lobe-chat:
image: lobehub/lobe-chat
container_name: lobe-chat
restart: always
ports:
- '3210:3210'
environment:
OPENAI_API_KEY: sk-xxxx
OPENAI_PROXY_URL: https://api-proxy.com/v1
ACCESS_CODE: lobe66
```
运行以下命令启动 Lobe Chat 服务:
```bash
$ docker-compose up -d
```
### Crontab 自动更新脚本(可选)
类似地,你可以使用以下脚本来自动更新 Lobe Chat。使用 `Docker Compose` 时,环境变量无需额外配置。
```bash
#!/bin/bash
# auto-update-lobe-chat.sh
# 设置代理(可选)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
# 拉取最新的镜像并将输出存储在变量中
output=$(docker pull lobehub/lobe-chat:latest 2>&1)
# 检查拉取命令是否成功执行
if [ $? -ne 0 ]; then
exit 1
fi
# 检查输出中是否包含特定的字符串
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
# 如果镜像已经是最新的,则不执行任何操作
if [ $? -eq 0 ]; then
exit 0
fi
echo "Detected Lobe-Chat update"
# 删除旧的容器
echo "Removed: $(docker rm -f lobe-chat)"
# 也许需要先进入 `docker-compose.yml` 所在的目录
# cd /path/to/docker-compose-folder
# 运行新的容器
echo "Started: $(docker-compose up -d)"
# 打印更新的时间和版本
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
# 清理不再使用的镜像
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```
此脚本亦可以在 Crontab 中使用,但请确认你的 Crontab 可以找到正确的 Docker 命令。建议使用绝对路径。
配置 Crontab每 5 分钟执行一次脚本:
```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```
</Steps>

View file

@@ -0,0 +1,152 @@
import { Callout, Steps, Tabs } from 'nextra/components';
# Docker Deployment Guide
[![][docker-release-shield]][docker-release-link][![][docker-size-shield]][docker-size-link][![][docker-pulls-shield]][docker-pulls-link]
We provide a [Docker image][docker-release-link] for you to deploy the LobeChat service on your private device.
<Steps>
### Install Docker Container Environment
(If already installed, skip this step)
<Tabs items={['Ubuntu', 'CentOS']}>
<Tabs.Tab>
```fish
$ apt install docker.io
```
</Tabs.Tab>
<Tabs.Tab>
```fish
$ yum install docker
```
</Tabs.Tab>
</Tabs>
### Docker Command Deployment
Use the following command to start the LobeChat service with one click:
```fish
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
Command explanation:
- The default port mapping is `3210`, please ensure it is not occupied or manually change the port mapping.
- Replace `sk-xxxx` in the above command with your OpenAI API Key.
- For the complete list of environment variables supported by LobeChat, please refer to the [Environment Variables](/zh/self-hosting/environment-ariable) section.
<Callout>
Since the official Docker image build takes about half an hour, if you see the "update available"
prompt after deployment, you can wait for the image to finish building before deploying again.
</Callout>
<Callout type="warning">
The official Docker image does not have a password set. It is strongly recommended to add a
password to enhance security, otherwise you may encounter situations like [My API Key was
stolen!!!](https://github.com/lobehub/lobe-chat/issues/1123).
</Callout>
<Callout type="info">
Note that when the **deployment architecture is inconsistent with the image**, you need to
cross-compile **Sharp**, see [Sharp
Cross-Compilation](https://sharp.pixelplumbing.com/install#cross-platform) for details.
</Callout>
#### Using a Proxy Address
If you need to use the OpenAI service through a proxy, you can configure the proxy address using the `OPENAI_PROXY_URL` environment variable:
```fish
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
### Crontab Automatic Update Script (Optional)
If you want to automatically obtain the latest image, you can follow these steps.
First, create a `lobe.env` configuration file with various environment variables, for example:
```env
OPENAI_API_KEY=sk-xxxx
OPENAI_PROXY_URL=https://api-proxy.com/v1
ACCESS_CODE=arthals2333
CUSTOM_MODELS=-gpt-4,-gpt-4-32k,-gpt-3.5-turbo-16k,gpt-3.5-turbo-1106=gpt-3.5-turbo-16k,gpt-4-0125-preview=gpt-4-turbo,gpt-4-vision-preview=gpt-4-vision
```
Then, you can use the following script to automate the update:
```bash
#!/bin/bash
# auto-update-lobe-chat.sh
# Set up proxy (optional)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
# Pull the latest image and store the output in a variable
output=$(docker pull lobehub/lobe-chat:latest 2>&1)
# Check if the pull command was executed successfully
if [ $? -ne 0 ]; then
exit 1
fi
# Check if the output contains a specific string
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
# If the image is already up to date, do nothing
if [ $? -eq 0 ]; then
exit 0
fi
echo "Detected Lobe-Chat update"
# Remove the old container
echo "Removed: $(docker rm -f Lobe-Chat)"
# Run the new container
echo "Started: $(docker run -d --network=host --env-file /path/to/lobe.env --name=Lobe-Chat --restart=always lobehub/lobe-chat)"
# Print the update time and version
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
# Clean up unused images
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```
This script can be used in Crontab, but please ensure that your Crontab can find the correct Docker command. It is recommended to use absolute paths.
Configure Crontab to execute the script every 5 minutes:
```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```
[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square
</Steps>

View file

@@ -0,0 +1,149 @@
import { Callout, Steps, Tabs } from 'nextra/components';
# Docker 部署指引
[![][docker-release-shield]][docker-release-link][![][docker-size-shield]][docker-size-link][![][docker-pulls-shield]][docker-pulls-link]
我们提供了 [Docker 镜像][docker-release-link],供你在自己的私有设备上部署 LobeChat 服务。
[docker-pulls-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-pulls-shield]: https://img.shields.io/docker/pulls/lobehub/lobe-chat?color=45cc11&labelColor=black&style=flat-square
[docker-release-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-release-shield]: https://img.shields.io/docker/v/lobehub/lobe-chat?color=369eff&label=docker&labelColor=black&logo=docker&logoColor=white&style=flat-square
[docker-size-link]: https://hub.docker.com/r/lobehub/lobe-chat
[docker-size-shield]: https://img.shields.io/docker/image-size/lobehub/lobe-chat?color=369eff&labelColor=black&style=flat-square
<Steps>
### 安装 Docker 容器环境
(如果已安装,请跳过此步)
<Tabs items={['Ubuntu', 'CentOS']}>
<Tabs.Tab>
```fish
$ apt install docker.io
```
</Tabs.Tab>
<Tabs.Tab>
```fish
$ yum install docker
```
</Tabs.Tab>
</Tabs>
### Docker 指令部署
使用以下命令即可一键启动 LobeChat 服务:
```fish
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
指令说明:
- 默认映射端口为 `3210`, 请确保未被占用或手动更改端口映射
- 使用你的 OpenAI API Key 替换上述命令中的 `sk-xxxx`
- LobeChat 支持的完整环境变量列表请参考 [环境变量](/zh/self-hosting/environment-ariable) 部分
<Callout>
  由于官方的 Docker
  镜像构建大约需要半小时,如果在部署后出现「存在更新」的提示,可以等待镜像构建完成后再次部署。
</Callout>
<Callout type="warning">
官方 Docker 镜像中未设定密码,强烈建议添加密码以提升安全性,否则你可能会遇到 [My API Key was
stolen!!!](https://github.com/lobehub/lobe-chat/issues/1123) 这样的情况
</Callout>
<Callout type="info">
注意,当**部署架构与镜像的不一致时**,需要对 **Sharp** 进行交叉编译,详见 [Sharp
交叉编译](https://sharp.pixelplumbing.com/install#cross-platform)
</Callout>
#### 使用代理地址
如果你需要通过代理使用 OpenAI 服务,你可以使用 `OPENAI_PROXY_URL` 环境变量来配置代理地址:
```fish
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
### Crontab 自动更新脚本(可选)
如果你想自动获得最新的镜像,你可以如下操作。
首先,新建一个 `lobe.env` 配置文件,内容为各种环境变量,例如:
```env
OPENAI_API_KEY=sk-xxxx
OPENAI_PROXY_URL=https://api-proxy.com/v1
ACCESS_CODE=arthals2333
CUSTOM_MODELS=-gpt-4,-gpt-4-32k,-gpt-3.5-turbo-16k,gpt-3.5-turbo-1106=gpt-3.5-turbo-16k,gpt-4-0125-preview=gpt-4-turbo,gpt-4-vision-preview=gpt-4-vision
```
然后,你可以使用以下脚本来自动更新:
```bash
#!/bin/bash
# auto-update-lobe-chat.sh
# 设置代理(可选)
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
# 拉取最新的镜像并将输出存储在变量中
output=$(docker pull lobehub/lobe-chat:latest 2>&1)
# 检查拉取命令是否成功执行
if [ $? -ne 0 ]; then
exit 1
fi
# 检查输出中是否包含特定的字符串
echo "$output" | grep -q "Image is up to date for lobehub/lobe-chat:latest"
# 如果镜像已经是最新的,则不执行任何操作
if [ $? -eq 0 ]; then
exit 0
fi
echo "Detected Lobe-Chat update"
# 删除旧的容器
echo "Removed: $(docker rm -f Lobe-Chat)"
# 运行新的容器
echo "Started: $(docker run -d --network=host --env-file /path/to/lobe.env --name=Lobe-Chat --restart=always lobehub/lobe-chat)"
# 打印更新的时间和版本
echo "Update time: $(date)"
echo "Version: $(docker inspect lobehub/lobe-chat:latest | grep 'org.opencontainers.image.version' | awk -F'"' '{print $4}')"
# 清理不再使用的镜像
docker images | grep 'lobehub/lobe-chat' | grep -v 'latest' | awk '{print $3}' | xargs -r docker rmi > /dev/null 2>&1
echo "Removed old images."
```
此脚本可以在 Crontab 中使用,但请确认你的 Crontab 可以找到正确的 Docker 命令。建议使用绝对路径。
配置 Crontab每 5 分钟执行一次脚本:
```bash
*/5 * * * * /path/to/auto-update-lobe-chat.sh >> /path/to/auto-update-lobe-chat.log 2>&1
```
</Steps>

View file

@@ -0,0 +1,340 @@
import { Callout } from 'nextra/components';
# Environment Variables
LobeChat provides some additional configuration options during deployment, which can be customized using environment variables.
## Common Variables
### `ACCESS_CODE`
- Type: Optional
- Description: Add a password to access the LobeChat service. You can set a long password to prevent brute force attacks.
- Default: -
- Example: `awCTe)re_r74` or `rtrt_ewee3@09!`
### `ENABLE_OAUTH_SSO`
- Type: Optional
- Description: Enable Single Sign-On (SSO) for LobeChat. Set to `1` to enable SSO. For more information, see [Identity Verification Service](#identity-verification-service).
- Default: -
- Example: `1`
### `NEXT_PUBLIC_BASE_PATH`
- Type: Optional
- Description: Add a `basePath` for LobeChat.
- Default: -
- Example: `/test`
### `DEFAULT_AGENT_CONFIG`
- Type: Optional
- Description: Used to configure the default settings for the LobeChat default assistant. It supports various data types and structures, including key-value pairs, nested fields, array values, etc.
- Default: -
- Example: `model=gpt-4-1106-preview;params.max_tokens=300;plugins=search-engine,lobe-image-designer`
`DEFAULT_AGENT_CONFIG` is used to configure the default settings for the LobeChat default assistant. It supports various data types and structures, including key-value pairs, nested fields, array values, etc. The table below provides detailed information on the configuration options, examples, and corresponding explanations for the `DEFAULT_AGENT_CONFIG` environment variable:
| Configuration Type | Example | Explanation |
| ------------------- | -------------------------------------------- | -------------------------------------------------------- |
| Basic Key-Value Pair | `model=gpt-4` | Set the model to `gpt-4`. |
| Nested Field | `tts.sttLocale=en-US` | Set the language locale for the text-to-speech service to `en-US`. |
| Array | `plugins=search-engine,lobe-image-designer` | Enable the `search-engine` and `lobe-image-designer` plugins. |
| Chinese Comma | `plugins=search-engine,lobe-image-designer` | Same as above, demonstrating support for Chinese comma separation. |
| Multiple Configurations | `model=glm-4;provider=zhipu` | Set the model to `glm-4` and the model provider to `zhipu`. |
| Numeric Value | `params.max_tokens=300` | Set the maximum tokens to `300`. |
| Boolean Value | `enableAutoCreateTopic=true` | Enable automatic topic creation. |
| Special Characters | `inputTemplate="Hello; I am a bot;"` | Set the input template to `Hello; I am a bot;`. |
| Error Handling | `model=gpt-4;maxToken` | Ignore invalid entry `maxToken` and only parse `model=gpt-4`. |
| Value Override | `model=gpt-4;model=gpt-4-1106-preview` | If the key is repeated, use the value that appears last, in this case, the value of `model` is `gpt-4-1106-preview`. |
Further Reading:
- [\[RFC\] 022 - Environment Variable Configuration for Default Assistant Parameters](https://github.com/lobehub/lobe-chat/discussions/913)
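As a rough sketch of how such a `key=value;key=value` string can be decomposed (a simplified illustration, not the parser LobeChat actually uses):

```bash
# Split DEFAULT_AGENT_CONFIG on ';' and report each entry;
# entries without '=' are treated as invalid and skipped.
DEFAULT_AGENT_CONFIG="model=gpt-4;params.max_tokens=300;maxToken"
IFS=';' read -r -a pairs <<< "$DEFAULT_AGENT_CONFIG"
for pair in "${pairs[@]}"; do
  case "$pair" in
    *=*) echo "set ${pair%%=*} to ${pair#*=}" ;;
    *)   echo "ignored invalid entry: $pair" ;;
  esac
done
```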
## Identity Verification Service
### General Settings
#### `ENABLE_OAUTH_SSO`
- Type: Required
- Description: Enable Single Sign-On (SSO) for LobeChat. Set to `1` to enable single sign-on.
- Default: `-`
- Example: `1`
#### `NEXTAUTH_SECRET`
- Type: Required
- Description: Key used to encrypt the session tokens in Auth.js. You can generate the key using the following command: `openssl rand -base64 32`.
- Default: `-`
- Example: `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=`
### Auth0
<Callout>
Currently, we only support the Auth0 identity verification service provider. If you need to use another identity verification service provider, you can submit a [feature request](https://github.com/lobehub/lobe-chat/issues/new/choose) or a pull request.
</Callout>
#### `AUTH0_CLIENT_ID`
- Type: Required
- Description: Client ID of the Auth0 application. You can access it [here][auth0-client-page] and navigate to the application settings to view.
- Default: `-`
- Example: `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P`
#### `AUTH0_CLIENT_SECRET`
- Type: Required
- Description: Client Secret of the Auth0 application.
- Default: `-`
- Example: `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm`
#### `AUTH0_ISSUER`
- Type: Required
- Description: Issuer/domain of the Auth0 application.
- Default: `-`
- Example: `https://example.auth0.com`
## Model Service Provider
### OpenAI
#### `OPENAI_API_KEY`
- Type: Required
- Description: This is the API key you applied for on the OpenAI account page, you can go to [here][openai-api-page] to view
- Default: -
- Example: `sk-xxxxxx...xxxxxx`
#### `OPENAI_PROXY_URL`
- Type: Optional
- Description: If you manually configure the OpenAI interface proxy, you can use this configuration item to override the default OpenAI API request base URL
- Default: `https://api.openai.com/v1`
- Example: `https://api.chatanywhere.cn` or `https://aihubmix.com/v1`
<Callout type={'warning'}>
  Please check the request suffix of your proxy service provider. Some proxy service providers may add `/v1` to the request suffix, while others may not. If you find that the AI returns an empty message during testing, try adding the `/v1` suffix and retry.
</Callout>
<Callout type={'info'}>
  Whether to fill in `/v1` is closely related to the model service provider. For example, the default address of OpenAI is `api.openai.com/v1`. If your proxy forwards the `/v1` interface, you can simply fill in `proxy.com`. However, if the model service provider directly forwards the `api.openai.com` domain, then you need to add `/v1` to the URL yourself.
</Callout>
Related discussions:
- [Why is the return value blank after installing Docker, configuring the environment variables?](https://github.com/lobehub/lobe-chat/discussions/623)
- [Reasons for errors using third-party interfaces](https://github.com/lobehub/lobe-chat/discussions/734)
- [No response when the proxy server address is filled in for chatting](https://github.com/lobehub/lobe-chat/discussions/1065)
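As a quick illustration of why the suffix matters, the chat endpoint is resolved relative to the base URL (the URL below is only a placeholder):

```bash
# The final endpoint is the base URL plus a path; a missing or doubled
# /v1 in the base URL therefore changes the request target.
base="https://api-proxy.com/v1"
echo "resolved endpoint: ${base%/}/chat/completions"
```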
#### `CUSTOM_MODELS`
- Type: Optional
- Description: Used to control the model list, use `+` to add a model, use `-` to hide a model, use `model name=display name` to customize the display name of a model, separated by English commas.
- Default: `-`
- Example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
The above example adds `qwen-7b-chat` and `glm-6b` to the model list, removes `gpt-3.5-turbo` from the list, and displays the model name of `gpt-4-0125-preview` as `gpt-4-turbo`. If you want to disable all models first and then enable specific models, you can use `-all,+gpt-3.5-turbo`, which means only `gpt-3.5-turbo` is enabled.
You can find all current model names in [modelProviders](https://github.com/lobehub/lobe-chat/tree/main/src/config/modelProviders).
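A minimal shell sketch of the `+` / `-` / `=` syntax (illustrative only, not LobeChat's parser):

```bash
# Interpret each comma-separated CUSTOM_MODELS entry:
# '+name' adds a model, '-name' hides one, 'name=display' renames.
CUSTOM_MODELS="+qwen-7b-chat,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo"
IFS=',' read -r -a entries <<< "$CUSTOM_MODELS"
for entry in "${entries[@]}"; do
  case "$entry" in
    +*)  echo "add: ${entry#+}" ;;
    -*)  echo "hide: ${entry#-}" ;;
    *=*) echo "rename: ${entry%%=*} -> ${entry#*=}" ;;
    *)   echo "enable: $entry" ;;
  esac
done
```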
### Azure OpenAI
If you need to use Azure OpenAI to provide model services, you can refer to the [Deploying with Azure OpenAI](../Deployment/Deploy-with-Azure-OpenAI.en-US.md) section for detailed steps. Here, we will list the environment variables related to Azure OpenAI.
#### `USE_AZURE_OPENAI`
- Type: Optional
- Description: Set this value to `1` to enable Azure OpenAI configuration
- Default: -
- Example: `1`
#### `AZURE_API_KEY`
- Type: Optional
- Description: This is the API key you applied for on the Azure OpenAI account page
- Default: -
- Example: `c55168be3874490ef0565d9779ecd5a6`
#### `AZURE_API_VERSION`
- Type: Optional
- Description: The API version of Azure, following the format YYYY-MM-DD
- Default: `2023-08-01-preview`
- Example: `2023-05-15`, refer to the [latest version][azure-api-verion-url]
### ZHIPU AI
#### `ZHIPU_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the ZHIPU AI service
- Default: -
- Example: `4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
### Moonshot AI
#### `MOONSHOT_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the Moonshot AI service
- Default: -
- Example: `Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
### Google AI
#### `GOOGLE_API_KEY`
- Type: Required
- Description: This is the API key you applied for in the Google AI Platform to access Google AI services
- Default: -
- Example: `AIraDyDwcw254kwJaGjI9wwaHcdDCS__Vt3xQE`
### AWS Bedrock
#### `AWS_ACCESS_KEY_ID`
- Type: Required
- Description: Access key ID for AWS service authentication
- Default: -
- Example: `AKIA5STVRLFSB4S9HWBR`
#### `AWS_SECRET_ACCESS_KEY`
- Type: Required
- Description: Secret key for AWS service authentication
- Default: -
- Example: `Th3vXxLYpuKcv2BARktPSTPxx+jbSiFT6/0w7oEC`
#### `AWS_REGION`
- Type: Optional
- Description: Region setting for AWS services
- Default: `us-east-1`
- Example: `us-east-1`
### Ollama
#### `OLLAMA_PROXY_URL`
- Type: Optional
- Description: Used to enable the Ollama service. Setting this will display the available open-source language models in the model list, and it can also be used to specify custom language models.
- Default: -
- Example: `http://127.0.0.1:11434/v1`
## Plugin Service
### `PLUGINS_INDEX_URL`
- Type: Optional
- Description: The index address of the LobeChat plugin market. If you have deployed the plugin market service on your own, you can use this variable to override the default plugin market address.
- Default Value: `https://chat-plugins.lobehub.com`
### `PLUGIN_SETTINGS`
- Type: Optional
- Description: Used to configure plugin settings. Use the format `plugin_name:setting_field=setting_value` to configure the plugin settings. Multiple setting fields are separated by an English semicolon `;`, and multiple plugin settings are separated by an English comma `,`.
- Default Value: `-`
- Example: `search-engine:SERPAPI_API_KEY=xxxxx,plugin-2:key1=value1;key2=value2`
The above example sets the `SERPAPI_API_KEY` of the `search-engine` plugin to `xxxxx`, and sets `key1` of `plugin-2` to `value1`, and `key2` to `value2`. The generated plugin settings configuration is as follows:
```json
{
"plugin-2": {
"key1": "value1",
"key2": "value2"
},
"search-engine": {
"SERPAPI_API_KEY": "xxxxx"
}
}
```
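The mapping from the flat string to that structure can be sketched as follows (a simplified illustration):

```bash
# Decompose PLUGIN_SETTINGS: ',' separates plugins, ':' separates the
# plugin name from its settings, ';' separates individual key=value pairs.
PLUGIN_SETTINGS="search-engine:SERPAPI_API_KEY=xxxxx,plugin-2:key1=value1;key2=value2"
IFS=',' read -r -a plugins <<< "$PLUGIN_SETTINGS"
for plugin in "${plugins[@]}"; do
  name="${plugin%%:*}"
  settings="${plugin#*:}"
  IFS=';' read -r -a kvs <<< "$settings"
  for kv in "${kvs[@]}"; do
    echo "$name.${kv%%=*} = ${kv#*=}"
  done
done
```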
## Assistant Market
### `AGENTS_INDEX_URL`
- Type: Optional
- Description: The index address of the LobeChat assistant market. If you have deployed the assistant market service on your own, you can use this variable to override the default market address.
- Default Value: `https://chat-agents.lobehub.com`
## Data Analytics
### Vercel Analytics
#### `NEXT_PUBLIC_ANALYTICS_VERCEL`
- Type: Optional
- Description: Used to configure the environment variables for Vercel Analytics. Set to `1` to enable Vercel Analytics.
- Default Value: `-`
- Example: `1`
#### `NEXT_PUBLIC_VERCEL_DEBUG`
- Type: Optional
- Description: Used to enable the debug mode for Vercel Analytics.
- Default Value: `-`
- Example: `1`
### Posthog Analytics
#### `NEXT_PUBLIC_ANALYTICS_POSTHOG`
- Type: Optional
- Description: Used to enable the environment variables for [PostHog Analytics][posthog-analytics-url]. Set to `1` to enable PostHog Analytics.
- Default Value: `-`
- Example: `1`
#### `NEXT_PUBLIC_POSTHOG_KEY`
- Type: Optional
- Description: Set the PostHog project key.
- Default Value: `-`
- Example: `phc_xxxxxxxx`
#### `NEXT_PUBLIC_POSTHOG_HOST`
- Type: Optional
- Description: Set the deployment address of the PostHog service, defaulting to the official SaaS address.
- Default Value: `https://app.posthog.com`
- Example: `https://example.com`
#### `NEXT_PUBLIC_POSTHOG_DEBUG`
- Type: Optional
- Description: Enable the debug mode for PostHog.
- Default Value: `-`
- Example: `1`
### Umami Analytics
#### `NEXT_PUBLIC_ANALYTICS_UMAMI`
- Type: Optional
- Description: Used to enable the environment variables for [Umami Analytics][umami-analytics-url]. Set to `1` to enable Umami Analytics.
- Default Value: `-`
- Example: `1`
#### `NEXT_PUBLIC_UMAMI_SCRIPT_URL`
- Type: Optional
- Description: The URL of the Umami script, defaulting to the script URL provided by Umami Cloud.
- Default Value: `https://analytics.umami.is/script.js`
- Example: `https://umami.your-site.com/script.js`
#### `NEXT_PUBLIC_UMAMI_WEBSITE_ID`
- Type: Required
- Description: Your Umami Website ID.
- Default Value: `-`
- Example: `E738D82A-EE9E-4806-A81F-0CA3CAE57F65`
[auth0-client-page]: https://manage.auth0.com/dashboard
[azure-api-verion-url]: https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version
[openai-api-page]: https://platform.openai.com/account/api-keys
[posthog-analytics-url]: https://posthog.com
[umami-analytics-url]: https://umami.is

View file

@@ -1,33 +1,8 @@
import { Callout } from 'nextra/components';
# 环境变量
LobeChat 在部署时提供了一些额外的配置项,使用环境变量进行设置
#### TOC
- [通用变量](#通用变量)
- [`ACCESS_CODE`](#access_code)
- [`ENABLE_OAUTH_SSO`](#enable_oauth_sso)
- [`NEXT_PUBLIC_BASE_PATH`](#next_public_base_path)
- [身份验证服务](#身份验证服务)
- [通用设置](#通用设置)
- [Auth0](#auth0)
- [模型服务商](#模型服务商)
- [OpenAI](#openai)
- [Azure OpenAI](#azure-openai)
- [智谱 AI](#智谱-ai)
- [Moonshot AI](#moonshot-ai)
- [Google AI](#google-ai)
- [AWS Bedrock](#aws-bedrock)
- [Ollama](#ollama)
- [插件服务](#插件服务)
- [`PLUGINS_INDEX_URL`](#plugins_index_url)
- [`PLUGIN_SETTINGS`](#plugin_settings)
- [角色服务](#角色服务)
- [`AGENTS_INDEX_URL`](#agents_index_url)
- [数据统计](#数据统计)
- [Vercel Analytics](#vercel-analytics)
- [Posthog Analytics](#posthog-analytics)
- [Umami Analytics](#umami-analytics)
LobeChat 在部署时提供了一些额外的配置项,你可以使用环境变量进行自定义设置。
## 通用变量
@@ -61,18 +36,18 @@ LobeChat 在部署时提供了一些额外的配置项,使用环境变量进
`DEFAULT_AGENT_CONFIG` 用于配置 LobeChat 默认助理的默认配置。它支持多种数据类型和结构,包括键值对、嵌套字段、数组值等。下表详细说明了 `DEFAULT_AGENT_CONFIG` 环境变量的配置项、示例以及相应解释:
| 配置项类型 | 示例 | 解释 |
| ---------- | -------------------------------------------- | ---------------------------------------------------------------------------- |
| 基本键值对 | `model=gpt-4` | 设置模型为 `gpt-4`。 |
| 嵌套字段 | `tts.sttLocale=en-US` | 设置文本到语音服务的语言区域为 `en-US`。 |
| 数组 | `plugins=search-engine,lobe-image-designer` | 启用 `search-engine` 和 `lobe-image-designer` 插件。 |
| 中文逗号 | `plugins=search-engine,lobe-image-designer` | 同上,演示支持中文逗号分隔。 |
| 多个配置项 | `model=glm-4;provider=zhipu` | 设置模型为 `glm-4` 且模型服务商为 `zhipu`。 |
| 数字值 | `params.max_tokens=300` | 设置最大令牌数为 `300`。 |
| 布尔值 | `enableAutoCreateTopic=true` | 启用自动创建主题。 |
| 特殊字符 | `inputTemplate="Hello; I am a bot;"` | 设置输入模板为 `Hello; I am a bot;`。 |
| 错误处理 | `model=gpt-4;maxToken` | 忽略无效条目 `maxToken`,仅解析出 `model=gpt-4`。 |
| 值覆盖 | `model=gpt-4;model=gpt-4-1106-preview` | 如果键重复,使用最后一次出现的值,此处 `model` 的值为 `gpt-4-1106-preview`。 |
| 配置项类型 | 示例 | 解释 |
| --- | --- | --- |
| 基本键值对 | `model=gpt-4` | 设置模型为 `gpt-4`。 |
| 嵌套字段 | `tts.sttLocale=en-US` | 设置文本到语音服务的语言区域为 `en-US`。 |
| 数组 | `plugins=search-engine,lobe-image-designer` | 启用 `search-engine` 和 `lobe-image-designer` 插件。 |
| 中文逗号 | `plugins=search-enginelobe-image-designer` | 同上,演示支持中文逗号分隔。 |
| 多个配置项 | `model=glm-4;provider=zhipu` | 设置模型为 `glm-4` 且模型服务商为 `zhipu`。 |
| 数字值 | `params.max_tokens=300` | 设置最大令牌数为 `300`。 |
| 布尔值 | `enableAutoCreateTopic=true` | 启用自动创建主题。 |
| 特殊字符 | `inputTemplate="Hello; I am a bot;"` | 设置输入模板为 `Hello; I am a bot;`。 |
| 错误处理 | `model=gpt-4;maxToken` | 忽略无效条目 `maxToken`,仅解析出 `model=gpt-4`。 |
| 值覆盖 | `model=gpt-4;model=gpt-4-1106-preview` | 如果键重复,使用最后一次出现的值,此处 `model` 的值为 `gpt-4-1106-preview`。 |
相关阅读:
@@ -82,36 +57,44 @@ LobeChat 在部署时提供了一些额外的配置项,使用环境变量进
### 通用设置
#### `ENABLE_OAUTH_SSO`
- 类型:必选
- 描述:为 LobeChat 启用单点登录 (SSO)。设置为 `1` 以启用单点登录。
- 默认值: `-`
- 示例: `1`
#### `NEXTAUTH_SECRET`
- 类型:必须
- 类型:必
- 描述:用于加密 Auth.js 会话令牌的密钥。您可以使用以下命令生成秘钥: `openssl rand -base64 32`.
- 默认值: `-`
- 示例: `Tfhi2t2pelSMEA8eaV61KaqPNEndFFdMIxDaJnS1CUI=`
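如果环境中没有 `openssl`,也可以用任何能生成 32 字节加密安全随机数并做 Base64 编码的工具得到等价的密钥,例如下面的 Python 片段(仅为示意):

```python
import base64
import secrets

# 生成 32 字节加密安全随机数并进行 Base64 编码,
# 效果等同于 `openssl rand -base64 32`
nextauth_secret = base64.b64encode(secrets.token_bytes(32)).decode()
print(nextauth_secret)
```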
### Auth0
> \[!NOTE] 注意事项:
>
> 目前我们只支持 Auth0 身份验证服务提供商。如果您需要使用其他身份验证服务提供商,可以提交功能请求或 Pull Request。
<Callout>
目前我们只支持 Auth0 身份验证服务提供商。如果您需要使用其他身份验证服务提供商,可以提交
[功能请求](https://github.com/lobehub/lobe-chat/issues/new/choose) 或 Pull Request。
</Callout>
#### `AUTH0_CLIENT_ID`
- 类型:必
- 类型:必
- 描述: Auth0 应用程序的 Client ID您可以访问[这里][auth0-client-page]并导航至应用程序设置来查看
- 默认值: `-`
- 示例: `evCnOJP1UX8FMnXR9Xkj5t0NyFn5p70P`
#### `AUTH0_CLIENT_SECRET`
- 类型:必
- 类型:必
- 描述: Auth0 应用程序的 Client Secret
- 默认值: `-`
- 示例: `wnX7UbZg85ZUzF6ioxPLnJVEQa1Elbs7aqBUSF16xleBS5AdkVfASS49-fQIC8Rm`
#### `AUTH0_ISSUER`
- 类型:必
- 类型:必
- 描述: Auth0 应用程序的签发人 / 域
- 默认值: `-`
- 示例: `https://example.auth0.com`
@@ -134,12 +117,16 @@ LobeChat 在部署时提供了一些额外的配置项,使用环境变量进
- 默认值:`https://api.openai.com/v1`
- 示例:`https://api.chatanywhere.cn` 或 `https://aihubmix.com/v1`
> \[!NOTE] 注意事项:
>
> 请检查你的代理服务商的请求后缀,有的代理服务商会在请求后缀添加 `/v1`,有的则不会
> 如果你在测试时发现 AI 返回的消息为空,请尝试添加 `/v1` 后缀后重试。
<Callout type={'warning'}>
请检查你的代理服务商的请求后缀,有的代理服务商会在请求后缀添加
`/v1`,有的则不会。如果你在测试时发现 AI 返回的消息为空,请尝试添加 `/v1` 后缀后重试
</Callout>
是否填写 `/v1` 跟模型服务商有很大关系,比如 openai 的默认地址是 `api.openai.com/v1` 。如果你的代理上是转发了 `/v1` 这个接口,那么直接填 `proxy.com` 即可。 但如果模型服务商是直接转发了 `api.openai.com` 域名,那么你就要自己加上 `/v1` 这个 url。
<Callout type={'info'}>
是否填写 `/v1` 跟模型服务商有很大关系,比如 OpenAI 的默认地址是 `api.openai.com/v1`
。如果你的代理商转发了 `/v1` 这个接口,那么直接填 `proxy.com` 即可。
但如果模型服务商是直接转发了 `api.openai.com` 域名,那么你就要自己加上 `/v1` 这个 url。
</Callout>
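换句话说OpenAI 风格的客户端会在 `OPENAI_PROXY_URL` 之后直接拼接 `/chat/completions` 等接口路径。下面用一个极简的 Python 示意(`request_url` 为假设的演示函数,并非实际 SDK 实现)说明 `/v1` 是否要写取决于代理转发的层级:

```python
def request_url(base_url: str, path: str = "/chat/completions") -> str:
    # OpenAI 风格客户端:最终请求地址 = OPENAI_PROXY_URL + 接口路径
    return base_url.rstrip("/") + path

# 代理只转发了 api.openai.com 域名:需要自己在地址末尾补上 /v1
print(request_url("https://proxy.com/v1"))  # -> https://proxy.com/v1/chat/completions
# 代理已经把 /v1 映射好proxy.com 相当于 api.openai.com/v1直接填域名即可
print(request_url("https://proxy.com"))     # -> https://proxy.com/chat/completions
```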
相关讨论:
@@ -160,7 +147,7 @@ LobeChat 在部署时提供了一些额外的配置项,使用环境变量进
### Azure OpenAI
如果你需要使用 Azure OpenAI 来提供模型服务,可以查阅 [使用 Azure OpenAI 部署](Deploy-with-Azure-OpenAI.zh-CN.md) 章节查看详细步骤,这里将列举和 Azure OpenAI 相关的环境变量。
如果你需要使用 Azure OpenAI 来提供模型服务,可以查阅 [使用 Azure OpenAI 部署](../Deployment/Deploy-with-Azure-OpenAI.zh-CN.md) 章节查看详细步骤,这里将列举和 Azure OpenAI 相关的环境变量。
#### `USE_AZURE_OPENAI`
@@ -271,12 +258,12 @@ LobeChat 在部署时提供了一些额外的配置项,使用环境变量进
}
```
## 角色服务
## 助手市场
### `AGENTS_INDEX_URL`
- 类型:可选
- 描述LobeChat 角色市场的索引地址,如果你自行部署了角色市场的服务,可以使用该变量来覆盖默认的插件市场地址
- 描述LobeChat 助手市场的索引地址,如果你自行部署了助手市场的服务,可以使用该变量来覆盖默认的市场地址
- 默认值:`https://chat-agents.lobehub.com`
## 数据统计
@@ -332,8 +319,7 @@ LobeChat 在部署时提供了一些额外的配置项,使用环境变量进
#### `NEXT_PUBLIC_ANALYTICS_UMAMI`
- 类型:可选
- 描述:用于开启 [Umami Analytics][umami-analytics-url] 的环境变量,设为 `1`
时开启 Umami Analytics
- 描述:用于开启 [Umami Analytics][umami-analytics-url] 的环境变量,设为 `1` 时开启 Umami Analytics
- 默认值: `-`
- 示例:`1`
@@ -0,0 +1,66 @@
import { Callout, Steps } from 'nextra/components';
## `A` Vercel / Zeabur Deployment
If you deployed your project according to the one-click deployment steps in the README, you may have noticed that you are always prompted with "updates available." This is because Vercel defaults to creating a new project for you instead of forking the original project, which prevents accurate update detection. We recommend following these steps to redeploy:
- Delete the original repository;
- Use the <kbd>Fork</kbd> button in the top right corner of the page to fork the original project;
- Redeploy on `Vercel`.
### Enable Automatic Updates
<Callout>
If you encounter an error when executing `Upstream Sync`, please try executing it manually again.
</Callout>
After forking the project, due to GitHub's limitations, you need to manually enable Workflows on the Actions page of your forked project and start the Upstream Sync Action. Once enabled, you can set up automatic updates to occur every hour.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985117-4d48fe7b-0412-4667-8129-b25ebcf2c9de.png) ![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985177-7677b4ce-c348-4145-9f60-829d448d5be6.png)
## `B` Docker Deployment
Upgrading the Docker deployment version is very simple, you just need to redeploy the latest LobeChat image. Here are the commands required to perform these steps:
<Steps>
### Stop and Remove the Current Running LobeChat Container
Assuming the LobeChat container is named `lobe-chat`, use the following commands to stop and remove the currently running LobeChat container:
```fish
docker stop lobe-chat
docker rm lobe-chat
```
### Pull the Latest LobeChat Image
Use the following command to pull the latest Docker image for LobeChat:
```fish
docker pull lobehub/lobe-chat
```
### Restart the Docker Container
Redeploy the LobeChat container using the newly pulled image:
```fish
docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
-e ACCESS_CODE=lobe66 \
--name lobe-chat \
lobehub/lobe-chat
```
</Steps>
Ensure that you have sufficient permissions to stop and remove the container before executing these commands, and that Docker has sufficient permissions to pull the new image.
<Callout type={'info'}>
**If I redeploy, will I lose my local chat records?**
No need to worry, you won't. All of LobeChat's chat records are stored in your local browser. Therefore, when redeploying LobeChat using Docker, your chat records will not be lost.
</Callout>
@@ -1,12 +1,6 @@
# 自部署保持更新
import { Callout, Steps } from 'nextra/components';
## TOC
- [`A` Vercel\`\` / Zeabur 部署](#a-vercel--zeabur-部署)
- [启动自动更新](#启动自动更新)
- [`B` Docker 部署](#b-docker-部署)
## `A` Vercel\`\` / Zeabur 部署
## `A` Vercel / Zeabur 部署
如果你根据 README 中的一键部署步骤部署了自己的项目,你可能会发现总是被提示 “有可用更新”。这是因为 Vercel 默认为你创建新项目而非 fork 本项目,这将导致无法准确检测更新。我们建议按照以下步骤重新部署:
@@ -16,33 +10,37 @@
### 启动自动更新
> \[!NOTE]
>
> 如果你在执行 `Upstream Sync` 时遇到错误,请手动再执行一次
<Callout>如果你在执行 `Upstream Sync` 时遇到错误,请尝试手动再执行一次</Callout>
当你 Fork 了项目后,由于 GitHub 的限制,你需要手动在你 Fork 的项目的 Actions 页面启用 Workflows并启动 Upstream Sync Action。启用后你可以设置每小时进行一次自动更新。
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985117-4d48fe7b-0412-4667-8129-b25ebcf2c9de.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985177-7677b4ce-c348-4145-9f60-829d448d5be6.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985117-4d48fe7b-0412-4667-8129-b25ebcf2c9de.png) ![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/266985177-7677b4ce-c348-4145-9f60-829d448d5be6.png)
## `B` Docker 部署
Docker 部署版本的升级非常简单,只需要重新部署 LobeChat 的最新镜像即可。 以下是执行这些步骤所需的指令:
1. 停止并删除当前运行的 LobeChat 容器(假设 LobeChat 容器的名称是 `lobe-chat`
<Steps>
### 停止并删除当前运行的 LobeChat 容器
假设 LobeChat 容器的名称是 `lobe-chat`,使用以下指令停止并删除当前运行的 LobeChat 容器:
```fish
docker stop lobe-chat
docker rm lobe-chat
```
2. 拉取 LobeChat 的最新 Docker 镜像:
### 拉取最新的 LobeChat 镜像
使用以下命令拉取 LobeChat 的最新 Docker 镜像:
```fish
docker pull lobehub/lobe-chat
```
3. 使用新拉取的镜像重新部署 LobeChat 容器:
### 重新启动 Docker 容器
使用新拉取的镜像重新部署 LobeChat 容器:
```fish
docker run -d -p 3210:3210 \
@@ -53,10 +51,14 @@ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```
</Steps>
确保在执行这些命令之前,您有足够的权限来停止和删除容器,并且 Docker 有足够的权限来拉取新的镜像。
> \[!NOTE]
>
> 重新部署的话,我本地的聊天记录会丢失吗?
>
> 放心LobeChat 的聊天记录全部都存储在你的本地浏览器中。因此使用 Docker 重新部署 LobeChat 时,你的聊天记录并不会丢失。
<Callout type={'info'}>
**重新部署的话,我本地的聊天记录会丢失吗?**
放心,不会的。LobeChat 的聊天记录全部都存储在你的本地浏览器中。因此使用 Docker 重新部署 LobeChat 时,你的聊天记录并不会丢失。
</Callout>
@@ -0,0 +1,39 @@
import { Callout, Steps } from 'nextra/components';
# Vercel Deployment Guide
If you want to deploy LobeChat on Vercel, you can follow the steps below:
## Vercel Deployment Process
<Steps>
### Prepare your OpenAI API Key
Go to [OpenAI API Key](https://platform.openai.com/account/api-keys) to get your OpenAI API Key.
### Click the button below to deploy
[![][deploy-button-image]][deploy-link]
[deploy-button-image]: https://vercel.com/button
[deploy-link]: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-chat&env=OPENAI_API_KEY,ACCESS_CODE&envDescription=Find%20your%20OpenAI%20API%20Key%20by%20click%20the%20right%20Learn%20More%20button.%20%7C%20Access%20Code%20can%20protect%20your%20website&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=lobe-chat&repository-name=lobe-chat
Simply log in with your GitHub account, and remember to fill in `OPENAI_API_KEY` (required) and `ACCESS_CODE` (recommended) in the environment variables page.
### After deployment, you can start using it
### Bind a custom domain (optional)
Vercel's assigned domain DNS may be polluted in some regions, so binding a custom domain can establish a direct connection.
</Steps>
## Automatic Synchronization of Updates
If you have deployed your project using the one-click deployment steps mentioned above, you may find that you are always prompted with "updates available." This is because Vercel creates a new project for you by default instead of forking this project, which causes the inability to accurately detect updates.
<Callout>
We recommend following the [Self-Hosting Upstream Sync](/self-hosting/upstream-sync) steps to redeploy.
</Callout>
@@ -0,0 +1,39 @@
import { Callout, Steps } from 'nextra/components';
# Vercel 部署指引
如果想在 Vercel 上部署 LobeChat可以按照以下步骤进行操作
## Vercel 部署流程
<Steps>
### 准备好你的 OpenAI API Key
前往 [OpenAI API Key](https://platform.openai.com/account/api-keys) 获取你的 OpenAI API Key
### 点击下方按钮进行部署
[![][deploy-button-image]][deploy-link]
直接使用 GitHub 账号登录即可,记得在环境变量页填入 `OPENAI_API_KEY`(必填)和 `ACCESS_CODE`(推荐);
[deploy-button-image]: https://vercel.com/button
[deploy-link]: https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-chat&env=OPENAI_API_KEY,ACCESS_CODE&envDescription=Find%20your%20OpenAI%20API%20Key%20by%20click%20the%20right%20Learn%20More%20button.%20%7C%20Access%20Code%20can%20protect%20your%20website&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=lobe-chat&repository-name=lobe-chat
### 部署完毕后,即可开始使用
### 绑定自定义域名(可选)
Vercel 分配的域名 DNS 在某些区域被污染了,绑定自定义域名即可直连。
</Steps>
## 自动同步更新
如果你根据上述中的一键部署步骤部署了自己的项目,你可能会发现总是被提示 “有可用更新”。这是因为 Vercel 默认为你创建新项目而非 fork 本项目,这将导致无法准确检测更新。
<Callout>
我们建议按照 [📘 LobeChat
自部署保持更新](/zh/self-hosting/upstream-sync) 步骤重新部署。
</Callout>
@@ -0,0 +1,35 @@
import { Callout } from 'nextra/components';
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/268670869-f1ffbf66-42b6-42cf-a937-9ce1f8328514.png)
# Assistant Market
In LobeChat's Assistant Market, creators can discover a vibrant and innovative community that brings together numerous carefully designed assistants. These assistants not only play a crucial role in work scenarios but also provide great convenience in the learning process. Our market is not just a showcase platform, but also a collaborative space. Here, everyone can contribute their wisdom and share their personally developed assistants.
<Callout type={'info'}>
By [🤖/🏪 submitting agents][submit-agents-link], you can easily submit your assistant works to
our platform. We particularly emphasize that LobeChat has established a sophisticated automated
internationalization (i18n) workflow, which excels in seamlessly converting your assistants into
multiple language versions. This means that regardless of the language your users are using, they
can seamlessly experience your assistant.
</Callout>
<Callout>
We welcome all users to join this ever-growing ecosystem and participate in the iteration and
optimization of assistants. Together, let's create more interesting, practical, and innovative
assistants, further enriching the diversity and practicality of assistants.
</Callout>
## Assistant Examples
| Recently Added | Assistant Description |
| --- | --- |
| [Copywriting](https://chat-preview.lobehub.com/market?agent=copywriting)<br/><sup>By **[pllz7](https://github.com/pllz7)** on **2024-02-14**</sup> | Proficient in persuasive copywriting and consumer psychology<br/>`E-commerce` |
| [Private Domain Operation Expert](https://chat-preview.lobehub.com/market?agent=gl-syyy)<br/><sup>By **[guling-io](https://github.com/guling-io)** on **2024-02-14**</sup> | Proficient in private domain operation, traffic acquisition, conversion, and content planning, familiar with marketing theories and related classic works.<br/>`Private domain operation` `Traffic acquisition` `Conversion` `Content planning` |
| [Self-media Operation Expert](https://chat-preview.lobehub.com/market?agent=gl-zmtyy)<br/><sup>By **[guling-io](https://github.com/guling-io)** on **2024-02-14**</sup> | Proficient in self-media operation and content creation<br/>`Self-media operation` `Social media` `Content creation` `Fan growth` `Brand promotion` |
| [Product Description](https://chat-preview.lobehub.com/market?agent=product-description)<br/><sup>By **[pllz7](https://github.com/pllz7)** on **2024-02-14**</sup> | Create captivating product descriptions to improve e-commerce sales performance<br/>`E-commerce` |
> 📊 Total agents: [<kbd>**177**</kbd>](https://github.com/lobehub/lobe-chat-agents)
[submit-agents-link]: https://github.com/lobehub/lobe-chat-agents
[submit-agents-shield]: https://img.shields.io/badge/🤖/🏪_submit_agent-%E2%86%92-c4f042?labelColor=black&style=for-the-badge