mirror of
https://github.com/lobehub/lobehub
synced 2026-04-21 09:37:28 +00:00
📝 docs: Update changelog docs and release skills (#13897)
* 🔨 chore: update .vscode/settings.json (#13894)
* 🐛 fix(builtin-tool-local-system): honor glob scope in local system tool (#13875)
  Made-with: Cursor
* 📝 docs: Update changelog docs and release skills (#13897)
  - Update changelog documentation format across all historical changelog files
  - Merge release-changelog-style skill into version-release skill
  - Update changelog examples with improved formatting and structure
  Made-with: Cursor

---------

Co-authored-by: YuTengjing <ytj2713151713@gmail.com>
Co-authored-by: Innei <i@innei.in>
This commit is contained in:
parent
94b6827580
commit
549735be7f
79 changed files with 1977 additions and 1016 deletions
@@ -5,6 +5,14 @@ description: "Version release workflow. Use when the user mentions 'release', 'h
# Version Release Workflow

## Mandatory Companion Skill

For every `/version-release` execution, you MUST load and apply:

- `../microcopy/SKILL.md`

Changelog style guidance is now fully embedded in this skill. Keep release facts unchanged, and only improve structure, readability, and tone.

## Overview

The primary development branch is **canary**. All day-to-day development happens on canary. When releasing, canary is merged into main. After the merge, `auto-tag-release.yml` automatically handles tagging, version bumping, creating a GitHub Release, and syncing back to the canary branch.
@@ -150,6 +158,166 @@ All release PR bodies (both Minor and Patch) must include a user-facing changelo

- Weekly Release: See `reference/changelog-example/weekly-release.md`
- DB Migration: See `reference/changelog-example/db-migration.md`

### Mandatory Inputs Before Writing

1. Release diff context (`git log main..canary` and/or `git diff main...canary --stat`)
2. Existing release template constraints (title, credits, trigger rules)
3. `../microcopy/SKILL.md` terminology constraints
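The diff-context step (input 1) can be sketched as a short shell snippet. The version below builds a throwaway repo so it is self-contained; against the real repo you would run only the `git log` and `git diff` lines over the actual `main` and `canary` branches:

```shell
# Self-contained sketch: create a scratch repo with main and canary branches,
# then gather release diff context the same way as described above.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git -c user.name=Bot -c user.email=bot@example.com commit -q --allow-empty -m "chore: init"
git checkout -q -b canary
git -c user.name=Bot -c user.email=bot@example.com commit -q --allow-empty -m "✨ feat: example change"

git log --oneline main..canary      # commits that will ship in this release
git diff main...canary --stat       # file-level footprint of the release
count=$(git rev-list --count main..canary)
echo "commits ahead of main: $count"
```

The `main..canary` range lists commits reachable from canary but not main, which is exactly the set the changelog must describe.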
### Output Constraints (Hard Rules)

1. Keep all factual claims accurate to merged changes.
2. Do not invent numbers, scope, timelines, or availability tiers.
3. Keep release title and trigger-sensitive format unchanged.
4. Keep `Credits` section intact (format required by project conventions).
5. Prefer fewer headings and more natural narrative paragraphs.
6. EN/ZH versions must cover the same facts in the same order.
7. Prefer storytelling over feature enumeration.
8. Avoid `Key Updates` sections that are only bullet dumps unless explicitly requested.

### Editorial Voice (Notion/Linear-Inspired)

Target a changelog voice that is calm, confident, and human:

- Start from user reality, not internal implementation.
- Explain why this change matters before listing mechanics.
- Keep tone practical and grounded, but allow a little product warmth.
- Favor concrete workflow examples over abstract claims.
- Write like an update from a thoughtful product team, not a marketing launch page.
### Writing Model (3-Pass Rewrite)

#### Pass 1: Remove AI Vocabulary and Filler

- Replace inflated words with simple alternatives.
- Remove transition padding like "furthermore", "notably", "it is worth noting that".
- Cut generic importance inflation ("pivotal", "testament", "game-changer").
- Prefer direct verbs like `run`, `customize`, `manage`, `capture`, `improve`, `fix`.

#### Pass 2: Break AI Sentence Patterns

Avoid these structures:

- Parallel negation: "Not X, but Y"
- Tricolon overload: "A, B, and C" used repeatedly
- Rhetorical Q + answer: "What does this mean? It means..."
- Dramatic reveal openers: "Here's the thing", "The result?"
- Mirror symmetry in consecutive lines
- Overuse of em dashes
- Every paragraph ending in tidy "lesson learned" phrasing

#### Pass 3: Add Human Product Texture

- Lead with user-visible outcome, then explain mechanism.
- Mix sentence lengths naturally.
- Prefer straightforward phrasing over polished-but-empty language.
- Keep confidence, but avoid launch-ad hype.
- Write like a product team update, not a marketing page.

### Recommended Structure Blueprint

Use this shape unless the user asks otherwise:

1. `# 🚀 release: ...`
2. One opening paragraph (2-4 sentences) that explains overall user impact.
3. 2-4 narrative capability blocks (short headings optional):
   - each block = user value + key capability
4. `Improvements and fixes` / `体验优化与修复` with concise bullets
5. `Credits` with required mention format
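Expanded into a full PR body, the blueprint above looks roughly like this skeleton (capability names and placeholder text are illustrative, not required wording):

```md
# 🚀 release: ...

<Opening paragraph: 2-4 sentences on the overall user impact.>

## <Capability one>

<Short narrative block: user value plus the key capability.>

## <Capability two>

<Short narrative block.>

## Improvements and fixes

- <concise bullet>
- <concise bullet>

## Credits

<Credits in the required mention format, preserved exactly.>
```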
### Length and Reading Density (Important)

Avoid overly short release notes when the diff is substantial.

- Weekly release PR body:
  - Usually target 350-700 English words (or equivalent Chinese length)
  - Keep 2-4 narrative sections, each with at least one real paragraph
- Minor release PR body:
  - Usually target 500-1000 English words (or equivalent Chinese length)
  - Allow richer context and more concrete usage scenarios
- DB migration release PR body:
  - Keep concise, but still include context + impact + operator notes
- If there are many commits, increase narrative depth before adding more bullets.
- If there are few commits, stay concise and do not pad content.

### Storytelling Contract (Major Capabilities)

For each major capability, write in this order:

1. Prior context/problem (briefly)
2. What changed in this release
3. Practical impact on user workflow

Do not collapse major capability sections into one-line bullets.

### Section Anatomy (Preferred)

Each major section should follow this internal rhythm:

1. Lead sentence: what changed and who benefits.
2. Context sentence: what was painful, slow, or fragmented before.
3. Mechanism paragraph: how the new behavior works in practice.
4. Optional utility list (`Use X to:`) for actionable workflows.
5. Optional availability closer when plan/platform constraints matter.

This pattern increases readability and makes changelogs more enjoyable to read without sacrificing precision.
### Section and Heading Heuristics

- Keep heading count low (typically 3-5).
- Weekly release PR body target:
  - 1 opening paragraph
  - 2-4 major narrative sections
  - 1 improvements/fixes section
  - 1 credits section
- Never produce heading-per-bullet layout.
- If a section has 4+ bullets, convert it into 2-3 short narrative paragraphs when possible.

### Linear-Style Block Pattern

Use this pattern when writing major sections:

```md
## <Capability name>

<One sentence: what users can do now and why it matters.>

<One short paragraph: how this works in practice, in plain language.>

<Optional list for workflows>
Use <feature> to:

- <practical action 1>
- <practical action 2>
- <practical action 3>

<Optional availability sentence>
```
### Notion-Style Readability Moves

Apply these moves when appropriate:

- Use one clear "scene" sentence to ground context (for example, what a team is doing when the feature helps).
- Alternate paragraph lengths: one compact paragraph followed by a denser explanatory one.
- Prefer specific nouns (`triage inbox`, `topic switch`, `mobile session`) over broad terms like "experience" or "workflow improvements".
- Keep transitions natural (`Previously`, `Now`, `In practice`, `This means`) and avoid ornate writing.
- End key sections with a practical takeaway sentence, not a slogan.

### Anti-Pattern Red Flags (Rewrite Required)

- "Key Updates" followed by only bullets and no narrative context
- One bullet per feature with no prior context or user impact
- Repeated template like "Feature X: did Y"
- Heading-per-feature with no explanatory paragraph
- Mechanical transitions with no causal flow

### EN/ZH Synchronization Rules

- Keep section order aligned.
- Keep facts and scope aligned.
- Localize naturally; avoid literal sentence mirroring.
- If one language uses bullets for a section, the other should match the style intent.

### Writing Tips

- **User-facing**: Describe changes that users can perceive, not internal implementation details

@@ -157,3 +325,20 @@ All release PR bodies (both Minor and Patch) must include a user-facing changelo

- **Highlight key items**: Use `**bold**` for important feature names
- **Credit contributors**: Collect all committers via `git log` and list alphabetically
- **Flexible categories**: Choose categories based on actual changes — no need to force-fit all categories
- **Terminology enforcement**: Ensure wording follows `microcopy` skill terminology and tone constraints
- **Linear narrative enforcement**: Follow capability -> explanation -> optional "Use X to" list
- **Storytelling enforcement**: For major updates, write in "before -> now -> impact" order
- **Depth enforcement**: If the diff is non-trivial, prefer complete paragraphs over compressed bullet-only summaries
- **Pleasure-to-read enforcement**: Include concrete examples and practical scenarios so readers can imagine using the capability
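The contributor-collection tip can be sketched in shell. The snippet below builds a scratch repo with two authors so it is self-contained; against the real repo you would run only the `git log --format='%an' | sort -u` line over the release range:

```shell
# Self-contained sketch: list unique commit authors alphabetically,
# as suggested for the Credits section.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git -c user.name=Zoe -c user.email=zoe@example.com commit -q --allow-empty -m "feat: one"
git -c user.name=Alice -c user.email=alice@example.com commit -q --allow-empty -m "fix: two"

# %an prints the author name of each commit; sort -u dedupes and alphabetizes.
contributors=$(git log --format='%an' | sort -u)
echo "$contributors"
```

In the real workflow, scope the log to the release range (for example `git log --format='%an' main..canary | sort -u`) so only this release's contributors are credited.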
### Quick Checklist

- [ ] First paragraph explains user-visible release outcome
- [ ] Heading count is minimal and meaningful
- [ ] Major capabilities are short narrative paragraphs, not only bullets
- [ ] Includes "before -> now -> impact" for major sections
- [ ] No obvious AI patterns (parallel negation, rhetorical Q/A, dramatic reveal)
- [ ] Vocabulary is plain, direct, and product-credible
- [ ] Improvements/fixes remain concise and scannable
- [ ] Credits format is preserved exactly
- [ ] EN/ZH versions align in facts and order
@@ -4,16 +4,27 @@ A changelog reference for database migration release PR bodies.

---
This release includes a **database schema migration** involving **5 new tables** for the Agent Evaluation Benchmark system.

This release includes a **database schema migration** for the Agent Evaluation Benchmark. We are adding **5 new tables** so benchmark setup, runs, and run-topic records can be stored in a complete and queryable structure.

### Migration: Add Agent Evaluation Benchmark Tables

## Migration overview

- Added 5 new tables: `agent_eval_benchmarks`, `agent_eval_datasets`, `agent_eval_records`, `agent_eval_runs`, `agent_eval_run_topics`

Previously, benchmark-related data lacked a full lifecycle model, which made it harder to track the evaluation flow from dataset to run results. This migration introduces the missing relational layer so benchmark configuration, execution, and analysis records stay connected.

### Notes for Self-hosted Users

In practical terms, this reduces ambiguity for downstream features and gives operators a cleaner foundation for troubleshooting and reporting.

- The migration runs automatically on application startup
- No manual intervention required

Added tables:

- `agent_eval_benchmarks`
- `agent_eval_datasets`
- `agent_eval_records`
- `agent_eval_runs`
- `agent_eval_run_topics`

## Notes for self-hosted users

- Migration runs automatically during app startup.
- No manual SQL action is required in standard deployments.
- As with any schema release, we still recommend a database backup and rolling out during a low-traffic window.

Migration owner: @{pr-author}, who is responsible for this database schema change. Reach out to them with any migration-related issues.
@@ -4,42 +4,47 @@ A real-world changelog reference for weekly patch release PR bodies.

---
This release includes **82 commits**. Key updates are below.

This weekly release includes **82 commits**. The throughline is simple: less friction when moving from idea to execution. Across agent workflows, model coverage, and desktop polish, this release removes several small blockers that used to interrupt momentum.

### New Features and Enhancements

The result is not one headline feature, but a noticeably smoother week-to-week experience. Teams can evaluate agents with clearer structure, ship richer media flows, and spend less time debugging provider and platform edge cases.

- Added **Agent Benchmark** support for more systematic agent performance evaluation.
- Introduced the **video generation** feature end-to-end, including entry points, sidebar "new" badge support, and skeleton loading for topic switching.
- Expanded memory capabilities: support for memory effort/tool permission configuration and improved timeout calculation for memory analysis tasks.
- Added desktop editor support for image upload via file picker.

## Agent workflows and media generation

### Models and Provider Expansion

Previously, some agent evaluation and media generation flows still felt fragmented: setup was manual, discoverability was uneven, and switching between topics could interrupt context. This release adds **Agent Benchmark** support and lands the **video generation** path end-to-end, from entry point to generation feedback.

- Added a new provider: **Straico**.
- Added/updated support for:
  - Claude Sonnet 4.6
  - Gemini 3.1 Pro Preview
  - Qwen3.5 series
  - Grok Imagine (`grok-imagine-image`)
  - MiniMax 2.5
- Added related i18n copy and model parameter adaptations.

In practice, this means users can discover and run these workflows with fewer detours. Sidebar "new" indicators improve visibility, skeleton loading makes topic switches feel less abrupt, and memory-related controls now behave more predictably under real workload pressure.

### Desktop Improvements

We also expanded memory controls with effort and tool-permission configuration, and improved timeout calculation for memory analysis tasks so longer runs fail less often in production-like usage.

- Integrated `electron-liquid-glass` (macOS Tahoe).
- Improved DMG background assets and desktop release workflow.

## Models and provider coverage

### Stability, Security, and UX Fixes

Provider diversity matters most when teams can adopt new models without rewriting glue code every sprint. This release adds **Straico** and updates support for Claude Sonnet 4.6, Gemini 3.1 Pro Preview, Qwen3.5, Grok Imagine (`grok-imagine-image`), and MiniMax 2.5.

- Fixed multiple video generation pipeline issues: precharge refund handling, webhook token verification, pricing parameter usage, asset cleanup, and type safety.
- Fixed `sanitizeFileName` path traversal risks and added unit tests.
- Fixed MCP media URL generation with duplicated `APP_URL` prefix.

Use these updates to:

- route requests to newly available providers
- test newer model families without custom patching
- keep model parameters and related i18n copy aligned across providers

This keeps model exploration practical: faster evaluation loops, fewer adaptation surprises, and cleaner cross-provider behavior.

## Desktop and platform polish

Desktop receives a set of quality-of-life upgrades that reduce "death by a thousand cuts" moments. We integrated `electron-liquid-glass` for macOS Tahoe and improved DMG background assets and packaging flow for more consistent release output.

The desktop editor now supports image upload from the file picker, which shortens everyday authoring steps and removes one more reason to switch tools mid-task.

## Improvements and fixes

- Fixed multiple video pipeline issues across precharge refund handling, webhook token verification, pricing parameter usage, asset cleanup, and type safety.
- Fixed path traversal risk in `sanitizeFileName` and added corresponding unit tests.
- Fixed MCP media URL generation when `APP_URL` was duplicated in output paths.
- Fixed Qwen3 embedding failures caused by batch-size limits.
- Fixed multiple UI/interaction issues, including mobile header agent selector/topic count, ChatInput scrolling behavior, and tooltip stacking context.
- Fixed several UI interaction issues, including mobile header agent selector/topic count, ChatInput scrolling behavior, and tooltip stacking context.
- Fixed missing `@napi-rs/canvas` native bindings in Docker standalone builds.
- Improved GitHub Copilot authentication retry behavior and response error handling in edge cases.

### Credits

## Credits

Huge thanks to these contributors (alphabetical):
10
.vscode/settings.json
vendored
@@ -6,7 +6,11 @@
  },
  "editor.formatOnSave": true,
  // don't show errors, but fix on save and in the git pre-commit hook
  "eslint.rules.customizations": [],
  "eslint.rules.customizations": [
    { "rule": "simple-import-sort/exports", "severity": "off" },
    { "rule": "perfectionist/sort-interfaces", "severity": "off" },
    { "rule": "simple-import-sort/imports", "severity": "off" }
  ],
  "eslint.validate": [
    "json",
    "javascript",
@@ -16,7 +20,7 @@
    // support mdx
    "mdx"
  ],
  "mdx.server.enable": false,
  "js/ts.tsdk.path": "node_modules/typescript/lib",
  "npm.packageManager": "pnpm",
  "search.exclude": {
    "**/node_modules": true,

@@ -44,9 +48,7 @@
    // make stylelint work with tsx antd-style css template string
    "typescriptreact"
  ],
  "typescript.tsdk": "node_modules/typescript/lib",
  "vitest.disableWorkspaceWarning": true,
  "vitest.maximumConfigs": 10,
  "workbench.editor.customLabels.patterns": {
    "**/app/**/[[]*[]]/[[]*[]]/page.tsx": "${dirname(2)}/${dirname(1)}/${dirname} • page component",
    "**/app/**/[[]*[]]/page.tsx": "${dirname(1)}/${dirname} • page component",
@@ -1,9 +1,6 @@
---
title: LobeHub Plugin Ecosystem - Functionality Extensions and Development Resources
description: >-
  Discover how the LobeHub plugin ecosystem enhances the utility and flexibility
  of the LobeHub assistant, along with the development resources and plugin
  development guidelines provided.
title: 'Plugin System: Extend Your Agents with Community Skills'
description: LobeHub now supports a plugin ecosystem that lets Agents access real-time information, interact with external services, and handle specialized tasks without leaving the conversation.
tags:
  - LobeHub
  - Plugins
@@ -13,12 +10,29 @@ tags:

# Supported Plugin System

The LobeHub plugin ecosystem is a significant extension of its core functionalities, greatly enhancing the utility and flexibility of the LobeHub assistant.

LobeHub now supports plugins that extend what your Agents can do. Instead of being limited to built-in capabilities, Agents can now pull live data, interact with external platforms, and handle specialized workflows through community-built extensions.

<Video src="/blog/assets/28616219/f29475a3-f346-4196-a435-41a6373ab9e2.mp4" />

By leveraging plugins, the LobeHub assistants are capable of accessing and processing real-time information, such as searching online for data and providing users with timely and relevant insights.

## Access real-time information

Moreover, these plugins are not solely limited to news aggregation; they can also extend to other practical functionalities, such as quickly retrieving documents, generating images, obtaining data from various platforms such as Bilibili and Steam, and interacting with an array of third-party services.

Previously, conversations were limited to the knowledge cutoff of the underlying model. Now, with plugins like web search, your Agents can fetch current information—news, documentation, stock prices, or weather—right when you need it.

To learn more, please refer to the [Plugin Usage](/en/docs/usage/plugins/basic). Additionally, quality voice options (OpenAI Audio, Microsoft Edge Speech) are available to cater to users from different regions and cultural backgrounds. Users can select suitable voices based on personal preferences or specific situations, providing a personalized communication experience.

Use plugins to:

- Run web searches and get up-to-date answers
- Query documentation sites and technical references
- Retrieve platform data from services like Bilibili or Steam
- Generate images on demand during a conversation

## Community-powered flexibility

The plugin system is designed to grow with community contributions. Developers can build and share custom plugins that add new capabilities to any Agent. Users simply enable the plugins they need for their specific workflow.

This means your Agents become more specialized over time. A coding assistant might enable documentation search and code execution plugins. A creative assistant might use image generation and content research tools. The same underlying Agent adapts to different contexts through its enabled plugins.

## Voice options for natural interaction

Alongside plugin capabilities, LobeHub now offers quality voice synthesis options including OpenAI Audio and Microsoft Edge Speech. Choose a voice that matches your preference or scenario for more personalized interactions.

Learn more about plugin usage in our [documentation](/en/docs/usage/plugins/basic).
@@ -1,6 +1,6 @@
---
title: LobeHub 插件生态系统 - 功能扩展与开发资源
description: 了解 LobeHub 插件生态系统如何增强 LobeHub 助手的实用性和灵活性,以及提供的开发资源和插件开发指南。
title: '插件系统:用社区技能扩展你的助理'
description: LobeHub 现已支持插件生态,让助理能够获取实时信息、与外部服务交互,并在对话中处理各种专业任务。
tags:
  - LobeHub
  - 插件系统
@@ -10,12 +10,29 @@ tags:

# 支持插件系统

LobeHub 的插件生态系统是其核心功能的重要扩展,它极大地增强了 LobeHub 助手的实用性和灵活性。

LobeHub 现已支持插件功能,大幅扩展了助理的能力边界。借助社区开发的插件,助理可以获取实时数据、与外部平台交互,并处理各种专业工作流,而无需离开对话界面。

<Video src="/blog/assets/28616219/f29475a3-f346-4196-a435-41a6373ab9e2.mp4" />

通过利用插件,LobeHub 的助手们能够实现实时信息的获取和处理,例如搜索网络信息,为用户提供即时且相关的资讯。

## 获取实时信息

此外,这些插件不仅局限于新闻聚合,还可以扩展到其他实用的功能,如快速检索文档、生成图片、获取 Bilibili 、Steam 等各种平台数据,以及与其他各式各样的第三方服务交互。

以往,对话内容受限于模型本身的知识截止日期。现在,通过联网搜索等插件,助理可以实时获取最新资讯 —— 无论是新闻、技术文档、股价还是天气信息,都能在需要时即时查询。

通过查看 [插件使用](/zh/docs/usage/plugins/basic) 了解更多。质的声音选项 (OpenAI Audio, Microsoft Edge Speech),以满足不同地域和文化背景用户的需求。用户可以根据个人喜好或者特定场景来选择合适的语音,从而获得个性化的交流体验。

你可以使用插件来:

- 执行网页搜索,获得最新的答案
- 查询技术文档和参考资料
- 获取 Bilibili、Steam 等平台的数据
- 在对话过程中按需生成图片

## 社区驱动的灵活性

插件系统设计为随社区贡献而不断成长。开发者可以构建并分享自定义插件,为任何助理添加新能力。用户只需根据具体工作流启用所需插件即可。

这意味着你的助理可以变得更加专业化。编程助理可以启用文档搜索和代码执行插件,创意助理可以使用图像生成和内容研究工具。同一个助理通过启用不同的插件,就能适应不同的使用场景。

## 自然的语音交互

除了插件能力之外,LobeHub 还提供了高品质的语音合成选项,包括 OpenAI Audio 和 Microsoft Edge Speech。你可以根据个人偏好或具体场景选择合适的声音,获得更个性化的交互体验。

了解更多插件使用方法,请查看[文档](/zh/docs/usage/plugins/basic)。
@@ -1,12 +1,6 @@
---
title: >-
  LobeHub Supports Multimodal Interaction: Visual Recognition Enhances
  Intelligent Dialogue
description: >-
  LobeHub supports various large language models with visual recognition
  capabilities, allowing users to upload or drag and drop images. The assistant
  will recognize the content and engage in intelligent dialogue, creating a more
  intelligent and diverse chat environment.
title: 'Visual Recognition: Chat With Images, Not Just Text'
description: LobeHub now supports multimodal models including GPT-4 Vision, Google Gemini Pro Vision, and GLM-4 Vision. Upload or drag images into conversations and your Agent will understand and respond to visual content.
tags:
  - Visual Recognition
  - LobeHub
@@ -17,6 +11,25 @@ tags:

# Supported Models for Visual Recognition

LobeHub now supports several large language models with visual recognition capabilities, including OpenAI's [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision), Google Gemini Pro vision, and Zhiyuan GLM-4 Vision. This empowers LobeHub with multimodal interaction capabilities. Users can effortlessly upload images or drag and drop them into the chat window, where the assistant can recognize the image content and engage in intelligent dialogue, building a smarter and more diverse chat experience.

Conversations in LobeHub are no longer limited to text. We now support several large language models with visual recognition capabilities, including OpenAI's [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision), Google Gemini Pro Vision, and Zhiyuan GLM-4 Vision.

This feature opens up new avenues for interaction, allowing communication that extends beyond text to include rich visual elements. Whether sharing images during everyday use or interpreting graphics in specific industries, the assistant delivers an exceptional conversational experience. Additionally, we have carefully selected a range of high-quality voice options (OpenAI Audio, Microsoft Edge Speech) to cater to users from different regions and cultural backgrounds. Users can choose a suitable voice based on personal preferences or specific contexts, thus receiving a more personalized communication experience.

## Share images naturally

Upload an image or drag it directly into the chat window, and your Agent can understand the visual content and continue the discussion in context. This works for screenshots, photos, diagrams, or any visual reference you need to share.

This brings a more natural multimodal experience to both everyday and professional scenarios:

- Share photos from your day and discuss them
- Upload UI screenshots for design feedback
- Share diagrams and get explanations
- Reference visual content without describing it in words

## Context-aware visual understanding

The assistant doesn't just see the image—it understands it within the ongoing conversation. Ask follow-up questions about specific details, compare multiple images, or use visuals as reference material for complex discussions.

For specialized fields, this means clearer context and more practical responses. Medical imaging discussions, architectural reviews, or technical diagram analysis all become more natural when both parties can see the same visual reference.

## Voice options for personalized interaction

To better serve users across regions and preferences, we've also added quality voice options from OpenAI Audio and Microsoft Edge Speech. Choose a voice that fits your style or scenario for more personalized interactions.
@@ -1,6 +1,6 @@
---
title: LobeHub 支持多模态交互:视觉识别助力智能对话
description: LobeHub 支持多种具有视觉识别能力的大语言模型,用户可上传或拖拽图片,助手将识别内容并展开智能对话,打造更智能、多元化的聊天场景。
title: '视觉识别:与图片对话,不只是文字'
description: LobeHub 现已支持多模态模型,包括 GPT-4 Vision、Google Gemini Pro Vision 和 GLM-4 Vision。上传或拖拽图片到对话中,助理将理解视觉内容并作出回应。
tags:
  - 视觉识别
  - 多模态交互
@@ -11,6 +11,25 @@ tags:

# 支持模型视觉识别

LobeHub 已经支持 OpenAI 的 [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision) 、Google Gemini Pro vision、智谱 GLM-4 Vision 等具有视觉识别能力的大语言模型,这使得 LobeHub 具备了多模态交互的能力。用户可以轻松上传图片或者拖拽图片到对话框中,助手将能够识别图片内容,并在此基础上进行智能对话,构建更智能、更多元化的聊天场景。

LobeHub 的对话不再局限于纯文字。我们现已支持多个具备视觉识别能力的大语言模型,包括 OpenAI 的 [`gpt-4-vision`](https://platform.openai.com/docs/guides/vision)、Google Gemini Pro Vision,以及智谱 GLM-4 Vision。

这一特性打开了新的互动方式,使得交流不再局限于文字,而是可以涵盖丰富的视觉元素。无论是日常使用中的图片分享,还是在特定行业内的图像解读,助手都能提供出色的对话体验。,我们精心挑选了一系列高品质的声音选项 (OpenAI Audio, Microsoft Edge Speech),以满足不同地域和文化背景用户的需求。用户可以根据个人喜好或者特定场景来选择合适的语音,从而获得个性化的交流体验。

## 自然地分享图片

上传图片或直接拖拽到对话框,助理就能理解视觉内容并基于上下文继续对话。无论是截图、照片、图表还是任何视觉参考,都能轻松分享。

这为日常场景和专业场景带来了更自然的多模态体验:

- 分享生活中的照片并展开讨论
- 上传界面截图获取设计反馈
- 分享图表并获得解读
- 引用视觉内容而无需用文字描述

## 上下文感知的视觉理解

助理不只是 "看见" 图片 —— 它能在持续对话中理解图片内容。你可以针对特定细节追问、比较多张图片,或将视觉资料作为复杂讨论的参考。

对于专业领域,这意味着更清晰的上下文和更实用的回复。医学影像讨论、建筑方案评审或技术图表分析,当双方都能看到相同的视觉参考时,交流变得更加自然高效。

## 个性化的语音交互

为了更好地服务不同地区和偏好的用户,我们还加入了 OpenAI Audio 和 Microsoft Edge Speech 的高品质语音选项。选择符合你风格或场景的声音,获得更个性化的交互体验。
@@ -1,9 +1,6 @@
---
title: LobeHub Text-to-Image Generation Technology
description: >-
  LobeHub supports Text-to-Speech (TTS) and Speech-to-Text (STT) technologies,
  offering high-quality voice options for a personalized communication
  experience. Learn more about Lobe TTS Toolkit.
title: 'Voice Conversations: Talk Naturally With Your Agents'
description: LobeHub now supports Text-to-Speech (TTS) and Speech-to-Text (STT), enabling natural voice interactions. Speak with your Agents and hear responses in clear, personalized voices.
tags:
  - TTS
  - STT
@@ -14,6 +11,24 @@ tags:

# Supporting TTS & STT Voice Conversations

LobeHub supports Text-to-Speech (TTS) and Speech-to-Text (STT) technologies, allowing our application to transform textual information into clear voice output. Users can interact with our conversational agents as if they were talking to a real person. There are various voice options for users to choose from, providing the right audio source for their assistant. Additionally, for those who prefer auditory learning or seek to gain information while on the go, TTS offers an excellent solution.

LobeHub now supports Text-to-Speech (TTS) and Speech-to-Text (STT), turning typed conversations into natural voice interactions. You can speak with your Agents and hear their responses, making the experience closer to talking with a real person.

In LobeHub, we have carefully curated a selection of high-quality voice options (OpenAI Audio, Microsoft Edge Speech) to cater to users from different regions and cultural backgrounds. Users can select suitable voices based on personal preferences or specific scenarios, thus achieving a personalized communication experience.

## Natural voice interaction

With TTS, your Agents can read responses aloud in clear, natural-sounding voices. With STT, you can dictate messages instead of typing. Together, they enable hands-free interaction—useful when you're multitasking, on the move, or simply prefer speaking to typing.

This is especially helpful for:

- Auditory learners who process information better by hearing
- Users who want to stay productive while commuting or away from a keyboard
- Anyone who finds voice more accessible or convenient than text

## Personalized voice selection

Different Agents can have different voices. Choose a voice that matches each Agent's personality or purpose. A professional assistant might use a calm, measured tone. A creative collaborator might sound more expressive.

We've curated high-quality voices from OpenAI Audio and Microsoft Edge Speech to serve users across regions and preferences. Select the voice that fits your usage style or scenario.

## A complete communication loop

Voice support closes the gap between human and AI interaction styles. Speak naturally, hear responses aloud, and maintain context just like you would in a spoken conversation. The rest of LobeHub's features—plugins, multimodal support, context management—work seamlessly alongside voice mode.
|
|||
|
|
@ -1,6 +1,6 @@

---
title: LobeHub 文生图:文本转图片生成技术
description: LobeHub 支持文字转语音(TTS)和语音转文字(STT)技术,提供高品质声音选项,个性化交流体验。了解更多关于 Lobe TTS 工具包。
title: '语音会话:与你的助理自然对话'
description: LobeHub 现已支持文字转语音(TTS)和语音转文字(STT),实现自然的语音交互。与助理对话并听到清晰、个性化的语音回复。
tags:
  - TTS
  - STT

@ -12,6 +12,24 @@ tags:

# 支持 TTS & STT 语音会话

LobeHub 支持文字转语音(Text-to-Speech,TTS)和语音转文字(Speech-to-Text,STT)技术,我们的应用能够将文本信息转化为清晰的语音输出,用户可以像与真人交谈一样与我们的对话代理进行交流。用户可以从多种声音中选择,给助手搭配合适的音源。 同时,对于那些倾向于听觉学习或者想要在忙碌中获取信息的用户来说,TTS 提供了一个极佳的解决方案。
LobeHub 现已支持文字转语音(TTS)和语音转文字(STT),将文字对话转化为自然的语音交互。你可以与助理对话并听到它们的回复,体验更接近与真人交流。

在 LobeHub 中,我们精心挑选了一系列高品质的声音选项 (OpenAI Audio, Microsoft Edge Speech),以满足不同地域和文化背景用户的需求。用户可以根据个人喜好或者特定场景来选择合适的语音,从而获得个性化的交流体验。
## 自然的语音交互

借助 TTS,助理可以用清晰自然的声音朗读回复。借助 STT,你可以用语音输入代替打字。两者结合,实现了免提交互 —— 当你正在处理其他事务、在通勤途中,或单纯更喜欢说话时,这项功能尤其实用。

语音功能特别适合:

- 听觉型学习者,通过聆听更好地处理信息
- 希望在通勤或远离键盘时保持高效的用户
- 觉得语音比文字更便捷或更易用的用户

## 个性化的声音选择

不同的助理可以配备不同的声音。你可以根据每个助理的性格或用途选择合适的声音。专业的助理可以使用沉稳、从容的语调,创意型的助理则可以更加富有表现力。

我们精选了 OpenAI Audio 和 Microsoft Edge Speech 的高品质声音选项,以服务不同地区和偏好的用户。选择最符合你使用风格或场景的语音。

## 完整的交流闭环

语音支持弥合了人类与 AI 交互方式之间的差距。自然地说话,听到语音回复,并像在真实对话中一样保持上下文。LobeHub 的其他功能 —— 插件、多模态支持、上下文管理 —— 都能与语音模式无缝协作。
@ -1,11 +1,6 @@

---
title: 'LobeHub Text-to-Image: Text-to-Image Generation Technology'
description: >-
  LobeHub now supports the latest text-to-image generation technology, allowing
  users to directly invoke the text-to-image tool during conversations with the
  assistant for creative purposes. By utilizing AI tools such as DALL-E 3,
  MidJourney, and Pollinations, assistants can turn your ideas into images,
  making the creative process more intimate and immersive.
title: 'Text-to-Image: Create Visuals Directly in Chat'
description: LobeHub now supports text-to-image generation. Invoke DALL-E 3, MidJourney, or Pollinations directly during conversations to turn your ideas into images without leaving the chat.
tags:
  - Text-to-Image
  - LobeHub

@ -16,4 +11,18 @@ tags:

# Support for Text-to-Image Generation

The latest text-to-image generation technology is now supported, enabling LobeHub users to directly use the text-to-image tool during conversations with their assistant. By harnessing the capabilities of AI tools like [`DALL-E 3`](https://openai.com/dall-e-3), [`MidJourney`](https://www.midjourney.com/), and [`Pollinations`](https://pollinations.ai/), assistants can now transform your ideas into images. This allows for a more intimate and immersive creative process.
LobeHub now supports text-to-image generation, so you can create images directly while chatting with your Agents. With tools like [`DALL-E 3`](https://openai.com/dall-e-3), [`MidJourney`](https://www.midjourney.com/), and [`Pollinations`](https://pollinations.ai/), your Agents can turn your descriptions into visuals within the same conversation flow.

## Creative workflow without switching tools

Previously, generating AI images meant leaving your conversation, opening a separate tool, writing a prompt, waiting for results, then copying the image back. Now you simply describe what you want, and your Agent produces the image right there in the chat.

This keeps creative momentum flowing. Iterate on ideas quickly—request adjustments, explore variations, or refine descriptions—all without context switching. The conversation history maintains your creative direction, so you can reference previous ideas and build on them.

## Private and immersive creation

Image generation happens within your existing conversation, keeping your creative process contained and private. No need to manage separate accounts or jump between platforms. Your prompts, iterations, and final images stay in one place, organized alongside the rest of your discussion.

## Multiple generation options

Different tools excel at different styles. DALL-E 3 offers detailed, precise renders. MidJourney produces artistic, atmospheric results. Pollinations provides fast, accessible generation. Your Agent can help you choose and use the right tool for each creative task.
@ -1,8 +1,6 @@

---
title: LobeHub 文生图:文本转图片生成技术
description: >-
  LobeHub 现在支持最新的文本到图片生成技术,让用户可以在与助手对话中直接调用文生图工具进行创作。利用 DALL-E 3、MidJourney 和
  Pollinations 等 AI 工具,助手们可以将你的想法转化为图像,让创作过程更私密和沉浸式。
title: '文生图:在对话中直接创作视觉内容'
description: LobeHub 现已支持文本到图片生成。在对话中直接调用 DALL-E 3、MidJourney 或 Pollinations,无需离开聊天界面即可将想法转化为图像。
tags:
  - Text to Image
  - 文生图

@ -11,4 +9,18 @@ tags:

# 支持 Text to Image 文生图

现已支持最新的文本到图片生成技术,LobeHub 现在能够让用户在与助手对话中直接调用文成图工具进行创作。通过利用 [`DALL-E 3`](https://openai.com/dall-e-3)、[`MidJourney`](https://www.midjourney.com/) 和 [`Pollinations`](https://pollinations.ai/) 等 AI 工具的能力, 助手们现在可以将你的想法转化为图像。同时可以更私密和沉浸式的完成你的创造过程。
LobeHub 现已支持文生图功能,你可以在与助理对话时直接创作图像。借助 [`DALL-E 3`](https://openai.com/dall-e-3)、[`MidJourney`](https://www.midjourney.com/) 和 [`Pollinations`](https://pollinations.ai/) 等工具,助理可以在同一段对话中将你的描述转化为视觉作品。

## 无需切换工具的创意工作流

以往,生成 AI 图像意味着离开对话、打开另一个工具、编写提示词、等待生成结果,然后再把图片复制回来。现在,你只需描述想要什么,助理就能直接在对话中生成图像。

这让创意节奏保持流畅。快速迭代想法 —— 要求调整、探索变体、优化描述 —— 都无需切换上下文。对话历史会保留你的创作方向,让你可以引用之前的想法并在此基础上继续发展。

## 私密且沉浸的创作体验

图像生成在现有对话中完成,让创作过程保持封闭和私密。无需管理多个账号或在不同平台之间跳转。你的提示词、迭代过程和最终图像都保存在同一个地方,与讨论内容一起有序管理。

## 多种生成选项

不同工具擅长不同风格。DALL-E 3 提供精细、准确的渲染效果,MidJourney 产出富有艺术感和氛围感的作品,Pollinations 提供快速、便捷的生成能力。你的助理可以帮你为每个创作任务选择和调用合适的工具。
@ -1,8 +1,9 @@

---
title: LobeHub Supports Multi-User Management with Clerk and Next-Auth
title: Authentication That Adapts to Your Stack
description: >-
  LobeHub offers various user authentication and management solutions, including
  Clerk and Next-Auth, to meet the diverse needs of different users.
  LobeHub now supports both Clerk and Next-Auth, giving teams flexibility to
  choose the authentication approach that fits their deployment model and
  security requirements.
tags:
  - User Management
  - Next-Auth

@ -11,24 +12,36 @@ tags:

  - Multi-Factor Authentication
---

# Support for Multi-User Management with Clerk and Next-Auth
# Authentication That Adapts to Your Stack

In modern applications, user management and authentication are crucial features. To cater to the diverse needs of users, LobeHub provides two primary user authentication and management solutions: `next-auth` and `Clerk`. Whether you're looking for simple user registration and login or need more advanced multi-factor authentication and user management, LobeHub can flexibly accommodate your requirements.
Every product needs reliable sign-in, but not every team has the same requirements. Some need to get up and running quickly with social logins. Others need enterprise-grade controls from day one. LobeHub now supports both paths by integrating with next-auth and Clerk.

## Next-Auth: A Flexible and Powerful Authentication Library
This gives teams the freedom to start simple and upgrade security when the time is right—without rethinking their entire auth architecture.

LobeHub integrates `next-auth`, a flexible and powerful authentication library that supports various authentication methods, including OAuth, email login, and credential-based login. With `next-auth`, you can easily implement the following features:
## Next-Auth: Start Fast, Stay Flexible

- **User Registration and Login**: Supports multiple authentication methods to meet different user needs.
- **Session Management**: Efficiently manage user sessions to ensure security.
- **Social Login**: Quick login options for various social media platforms.
- **Data Security**: Protects user data privacy and security.
Next-Auth provides a straightforward authentication layer for teams that want to ship quickly. It handles the essentials: OAuth from major providers, email-based login, and credential-based flows, all without managing a separate user service.

## Clerk: A Modern User Management Platform
Use this when you need:

For users who require more advanced user management capabilities, LobeHub also supports [Clerk](https://clerk.com), a modern user management platform. Clerk offers a richer set of features, helping you achieve enhanced security and flexibility:
- Quick setup with social providers like GitHub or Google
- Session management that just works
- Full control over the sign-in UI and flow
- Privacy-friendly auth that keeps user data in your infrastructure
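To make this concrete, here is a minimal sketch of the kind of Next-Auth setup described above. It is illustrative only, not LobeHub's actual configuration: the GitHub provider and the environment variable names are assumptions.

```typescript
// pages/api/auth/[...nextauth].ts (hypothetical example, not LobeHub's real config)
// GitHub stands in for any social provider that next-auth supports.
import NextAuth from "next-auth";
import GitHubProvider from "next-auth/providers/github";

export default NextAuth({
  providers: [
    GitHubProvider({
      // Assumed env var names; supply your own OAuth app credentials.
      clientId: process.env.GITHUB_ID ?? "",
      clientSecret: process.env.GITHUB_SECRET ?? "",
    }),
  ],
});
```

Swapping in email or credentials providers follows the same shape: add another entry to the `providers` array.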

- **Multi-Factor Authentication (MFA)**: Provides an additional layer of security.
- **User Profile Management**: Easily manage user information and settings.
- **Login Activity Monitoring**: Real-time monitoring of user login activities to ensure account security.
- **Scalability**: Supports complex user management needs.
## Clerk: Enterprise-Ready Identity

When you need more than sign-in—multi-factor authentication, user profiles, and login activity monitoring—Clerk provides those capabilities out of the box. It's a managed identity platform that scales with your product.

Switch to Clerk when you need:

- MFA for sensitive accounts or compliance requirements
- Built-in user profile and account management UI
- Real-time login activity tracking
- Scalable identity infrastructure without operational overhead

## Improvements and fixes

- Added support for next-auth v5 beta with improved session handling
- Fixed redirect loop issues when using custom sign-in pages
- Improved error messages for failed OAuth connections
@ -1,6 +1,7 @@

---
title: LobeHub 支持 Clerk 与 Next-Auth 多用户管理支持
description: LobeHub 提供 Clerk 和 Next-Auth 等多种用户认证和管理方案,以满足不同用户的需求。
title: 灵活适配的认证体系:Clerk 与 Next-Auth 双方案支持
description: >-
  LobeHub 现已支持 Clerk 和 Next-Auth 两种认证方案,让团队可以根据部署模式和安全需求选择最适合的身份验证方式。
tags:
  - 用户管理
  - 身份验证

@ -9,24 +10,36 @@ tags:

  - 多因素认证
---

# 支持 Clerk 与 Next-Auth 多用户管理支持
# 灵活适配的认证体系:Clerk 与 Next-Auth 双方案支持

在现代应用中,用户管理和身份验证是至关重要的功能。为满足不同用户的多样化需求,LobeHub 提供了两种主要的用户认证和管理方案:`next-auth` 和 `Clerk`。无论您是追求简便的用户注册登录,还是需要更高级的多因素认证和用户管理,LobeHub 都可以灵活实现。
每个产品都需要可靠的登录系统,但不同团队的起点并不相同。有些团队希望快速接入社交登录上线,有些则需要企业级的安全管控。LobeHub 现在同时支持两种路径 —— 集成 next-auth 和 Clerk,满足不同阶段的需求。

## next-auth:灵活且强大的身份验证库
这让团队可以自由选择:先以简单方案快速启动,待业务需要时再升级安全能力,无需重构整个认证架构。

LobeHub 集成了 `next-auth`,一个灵活且强大的身份验证库,支持多种身份验证方式,包括 OAuth、邮件登录、凭证登录等。通过 `next-auth`,您可以轻松实现以下功能:
## Next-Auth:快速启动,灵活可控

- **用户注册和登录**:支持多种认证方式,满足不同用户的需求。
- **会话管理**:高效管理用户会话,确保安全性。
- **社交登录**:支持多种社交平台的快捷登录。
- **数据安全**:保障用户数据的安全性和隐私性。
Next-Auth 为希望快速交付的团队提供了轻量级认证层。它覆盖核心能力:主流 OAuth 提供商、邮箱登录、凭证登录,无需管理独立用户服务。

## Clerk:现代化用户管理平台
适用场景:

对于需要更高级用户管理功能的用户,LobeHub 还支持 [Clerk](https://clerk.com) ,一个现代化的用户管理平台。Clerk 提供了更丰富的功能,帮助您实现更高的安全性和灵活性:
- 需要快速接入 GitHub、Google 等社交登录
- 开箱即用的会话管理
- 对登录界面和流程的完全控制
- 用户数据保留在自有基础设施中的隐私友好方案

- **多因素认证 (MFA)**:提供更高的安全保障。
- **用户配置文件管理**:便捷管理用户信息和配置。
- **登录活动监控**:实时监控用户登录活动,确保账户安全。
- **扩展性**:支持复杂的用户管理需求。
## Clerk:企业级身份管理

当你需要的不仅是登录 —— 多因素认证、用户档案管理、登录行为监控 ——Clerk 提供开箱即用的完整能力。作为托管身份平台,它随产品规模自动扩展。

适用场景:

- 敏感账户需要 MFA 或满足合规要求
- 需要内置的用户资料与账户管理界面
- 实时监控登录活动
- 免运维的高扩展性身份基础设施

## 体验优化与修复

- 新增 next-auth v5 beta 支持,改进会话处理机制
- 修复自定义登录页面导致的重定向循环问题
- 优化 OAuth 连接失败时的错误提示信息
@ -1,6 +1,8 @@

---
title: LobeHub Supports Ollama for Local Large Language Model (LLM) Calls
description: LobeHub v0.127.0 supports using Ollama to call local large language models.
title: Run Local Models Alongside Cloud AIs
description: >-
  LobeHub v0.127.0 adds Ollama support, letting you run local large language
  models with the same interface you use for cloud providers.
tags:
  - Ollama AI
  - LobeHub

@ -9,20 +11,35 @@ tags:

  - GPT-4
---

# Support for Ollama Calls to Local Large Language Models 🦙
# Run Local Models Alongside Cloud AIs

With the release of LobeHub v0.127.0, we're excited to introduce a fantastic new feature—Ollama AI support! 🤯 Thanks to the robust infrastructure provided by [Ollama AI](https://ollama.ai/) and the [efforts of the community](https://github.com/lobehub/lobe-chat/pull/1265), you can now interact with local LLMs (Large Language Models) within LobeHub! 🤩
Cloud models are powerful, but sometimes you need data to stay local. Maybe it's a sensitive project. Maybe you want to experiment without API costs. Maybe you just like the idea of owning the entire stack. LobeHub v0.127.0 now supports Ollama, giving you the same chat experience whether your model lives in the cloud or on your machine.

We are thrilled to unveil this revolutionary feature to all LobeHub users at this special moment. The integration of Ollama AI not only represents a significant leap in our technology but also reaffirms our commitment to continuously seek more efficient and intelligent ways of communication with our users.
No separate interface to learn. No workflow fragmentation. Just point LobeHub at your local Ollama instance and start chatting.

## 💡 How to Start a Conversation with Local LLMs?
## Connect Your Local Models in One Line

If you're facing challenges with private deployments, we strongly recommend trying out the LobeHub Cloud service. We offer comprehensive model support to help you easily embark on your AI conversation journey.

Experience the newly upgraded LobeHub v1.6 and feel the powerful conversational capabilities brought by GPT-4!
Getting started is straightforward. If you already have Ollama running, connect LobeHub with a single Docker command:

```bash
docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
```

Yes, it's that simple! 🤩 You don't need to go through complicated configurations or worry about intricate installation processes. We've prepared everything for you; just one command is all it takes to start deep conversations with local AI.
That's it. LobeHub detects your local models and makes them available in the same model switcher you use for GPT-4, Claude, and others. Mix cloud and local models in the same workspace depending on what each conversation needs.
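As a rough end-to-end sketch of the local setup (assuming Ollama is installed on the host; `llama3` is only an example model name, not something this release prescribes):

```shell
# Start the Ollama server and fetch an example model (name is illustrative)
ollama serve &
ollama pull llama3

# Run LobeHub pointed at the local Ollama endpoint (the command from above)
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 \
  lobehub/lobe-chat

# Optionally confirm Ollama is reachable and lists the pulled model
curl http://localhost:11434/api/tags
```

The `/api/tags` call returns the locally available models as JSON, which is a quick way to check connectivity before opening the chat UI.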

## When to Use Local Models

- **Privacy-first work**: Keep sensitive conversations on your machine
- **Cost control**: No per-token charges for experimentation
- **Offline access**: Continue working without internet connectivity
- **Model testing**: Evaluate open-source models before production deployment

## Improvements and fixes

- Added automatic model discovery from Ollama endpoints
- Fixed streaming response handling for local model compatibility
- Improved error handling when Ollama service is unreachable

## Credits

Huge thanks to [the community contributor](https://github.com/lobehub/lobe-chat/pull/1265) who made Ollama integration possible, and to the Ollama team for building accessible local AI infrastructure.
@ -1,6 +1,7 @@

---
title: LobeHub 支持 Ollama 调用本地大语言模型(LLM)
description: LobeHub vLobeHub v0.127.0 支持 Ollama 调用本地大语言模型。
title: 本地模型与云端 AI 并行使用
description: >-
  LobeHub v0.127.0 新增 Ollama 支持,让你可以用与云端模型相同的界面运行本地大语言模型。
tags:
  - Ollama AI
  - LobeHub

@ -8,20 +9,35 @@ tags:

  - AI 对话
---

# 支持 Ollama 调用本地大语言模型 🦙
# 本地模型与云端 AI 并行使用

随着 LobeHub v0.127.0 的发布,我们迎来了一个激动人心的特性 —— Ollama AI 支持!🤯 在 [Ollama AI](https://ollama.ai/) 强大的基础设施和 [社区的共同努力](https://github.com/lobehub/lobe-chat/pull/1265) 下,现在您可以在 LobeHub 中与本地 LLM (Large Language Model) 进行交流了!🤩
云端模型固然强大,但有时你需要数据留在本地。可能是敏感项目,可能是想免去 API 费用做实验,也可能只是希望完全掌控整个技术栈。LobeHub v0.127.0 现已支持 Ollama,无论模型运行在云端还是本地机器,你都能获得一致的对话体验。

我们非常高兴能在这个特别的时刻,向所有 LobeHub 用户介绍这项革命性的特性。Ollama AI 的集成不仅标志着我们技术上的一个巨大飞跃,更是向用户承诺,我们将不断追求更高效、更智能的沟通方式。
无需学习新界面,无需割裂工作流程。将 LobeHub 指向你的 Ollama 实例,即可开始对话。

## 💡 如何启动与本地 LLM 的对话?
## 一行命令连接本地模型

如果您在私有化部署方面遇到困难,强烈推荐尝试 LobeHub Cloud 服务。我们提供全方位的模型支持,让您轻松开启 AI 对话之旅。

赶快来体验全新升级的 LobeHub v1.6,感受 GPT-4 带来的强大对话能力!
启动过程非常简单。如果你已运行 Ollama,只需一条 Docker 命令即可连接:

```bash
docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
```

是的,就是这么简单!🤩 您不需要进行繁杂的配置,也不必担心复杂的安装过程。我们已经为您准备好了一切,只需一行命令,即可开启与本地 AI 的深度对话。
仅此而已。LobeHub 会自动检测本地模型,并在你切换 GPT-4、Claude 等模型的同一处列出它们。根据每次对话的需求,自由混用云端和本地模型。

## 本地模型的适用场景

- **隐私优先工作**:敏感对话全程留在本地
- **成本控制**:实验性使用无需按 token 付费
- **离线使用**:无网络连接时仍可继续工作
- **模型测试**:生产部署前评估开源模型效果

## 体验优化与修复

- 新增 Ollama 端点自动模型发现功能
- 修复本地模型兼容性的流式响应处理问题
- 优化 Ollama 服务不可达时的错误提示

## 致谢

衷心感谢实现 Ollama 集成的[社区贡献者](https://github.com/lobehub/lobe-chat/pull/1265),以及 Ollama 团队打造的易用本地 AI 基础设施。
@ -1,9 +1,9 @@

---
title: 'LobeHub 1.0: New Architecture and New Possibilities'
title: 'LobeHub 1.0: A New Foundation for Persistent, Multi-User Workspaces'
description: >-
  LobeHub 1.0 brings a brand-new architecture and features for server-side
  databases and user authentication management, opening up new possibilities. On
  this basis, LobeHub Cloud has entered beta testing.
  LobeHub 1.0 introduces server-side database support and comprehensive user
  management, enabling knowledge bases, cross-device sync, and team
  collaboration. LobeHub Cloud enters beta with these capabilities built-in.
tags:
  - LobeHub
  - Version 1.0

@ -12,18 +12,51 @@ tags:

  - Cloud Beta Testing
---

# LobeHub 1.0: New Architecture and New Possibilities
# LobeHub 1.0: A New Foundation for Persistent, Multi-User Workspaces

Since announcing our move towards version 1.0 in March, we’ve been busy upgrading every aspect of our platform. After two months of intensive development, we are excited to announce the official release of LobeHub 1.0! Let’s take a look at our new features.
Since March, we've been rebuilding LobeHub from the ground up. Two months later, 1.0 is here. This isn't just an incremental update—it's a new architecture that enables the capabilities users have been asking for most.

## Server-Side Database Support
The 0.x era was defined by browser storage. Fast, simple, but limited. You couldn't sync across devices, build knowledge bases, or share agents with a team. Every session started fresh. LobeHub 1.0 changes that foundation.

The most significant feature of LobeHub 1.0 is the support for server-side databases. In the 0.x era, the lack of persistent storage on the server side made it challenging, if not impossible, to implement many features that users urgently needed, such as knowledge bases, cross-device synchronization, and private assistant markets.
## Server-Side Database: Data That Persists and Travels

## User Authentication Management
The centerpiece of 1.0 is server-side database support. With persistent storage, conversations and agents live beyond a single browser session. Switch from laptop to desktop without losing context. Build up institutional knowledge over time instead of starting from zero.

In the 0.x era, the most requested feature to be paired with server-side databases was user authentication management. Previously, we had integrated next-auth and Clerk as our authentication solutions. In response to demands for multi-user management, we have restructured the settings interface into a user panel, consolidating relevant user information within the new user interface.
This unlocks capabilities that were impossible in 0.x:

## LobeHub Cloud Beta Testing
- **Knowledge bases**: Store documents and reference them across conversations
- **Cross-device sync**: Pick up exactly where you left off on any device
- **Private agent marketplaces**: Share specialized agents within your team
- **Conversation history**: Search and revisit past discussions

LobeHub Cloud is our commercial version based on the open-source LobeHub, and all the features from version 1.0 are now live in LobeHub Cloud, which has entered beta testing. If you’re interested, you can join our waitlist here. During the beta testing period, a limited number of access slots will be released daily for testing opportunities.
## User Management: From Single-Player to Multi-Player

Alongside the database, 1.0 introduces proper user authentication and management. We've integrated both next-auth and Clerk as authentication providers, giving you flexibility based on your security needs.

The settings area has been restructured into a dedicated user panel that brings identity, preferences, and access control into one place. This is essential infrastructure for teams. Multiple users can now share a LobeHub instance with proper access boundaries and account management.

Use the new panel to:

- Manage account settings and API keys in one place
- Configure authentication providers (next-auth or Clerk)
- Control workspace access for team members
- Switch between personal and team contexts

## LobeHub Cloud: Managed 1.0, Ready to Use

LobeHub Cloud is our hosted offering built on the 1.0 architecture. All the capabilities above—server-side persistence, user management, knowledge bases—are available now without any setup.

We've opened a beta waitlist with limited daily access. If you want to skip self-hosting and start using LobeHub 1.0 immediately, [join the waitlist here](https://lobehub.com).

## Improvements and fixes

- Migrated core storage layer from localStorage to PostgreSQL
- Added database migration system for seamless upgrades
- Implemented session management with secure token handling
- Refactored settings UI into dedicated user panel
- Added support for multiple authentication providers
- Improved initial load performance with server-side rendering

## Credits

Huge thanks to everyone who contributed to the 1.0 architecture overhaul. This release represents foundational work by the entire LobeHub team that will support the platform for years to come.
@ -1,8 +1,8 @@

---
title: LobeHub 1.0:新的架构与新的可能
title: LobeHub 1.0:为持久化、多用户协作而生的新架构
description: >-
  LobeHub 1.0 带来了服务端数据库、用户鉴权管理的全新架构与特性,开启了新的可能 。在此基础上, LobeHub Cloud 开启 Beta
  版测试。
  LobeHub 1.0 引入服务端数据库支持和完善的用户管理体系,实现知识库、跨设备同步和团队协作能力。
  LobeHub Cloud 同步开启 Beta 测试,内置全部新特性。
tags:
  - LobeHub
  - 服务端数据库

@ -10,18 +10,51 @@ tags:

  - Beta 测试
---

# LobeHub 1.0:新的架构与新的可能
# LobeHub 1.0:为持久化、多用户协作而生的新架构

自从 3 月份宣布迈向 1.0 ,我们就开始着手全方面的升级。经过 2 个月的密集研发,我们很高兴地宣布 LobeHub 1.0 正式发布了!一起来看看我们的全新样貌吧~
从三月宣布迈向 1.0 开始,我们从底层彻底重构了 LobeHub。两个月后,1.0 正式到来。这不仅是增量更新,更是全新架构 —— 它实现了用户最迫切需要的核心能力。

## 服务端数据库支持
0.x 时代依赖浏览器存储。快速、简单,但受限。无法跨设备同步、无法构建知识库、无法与团队共享助手。每次会话都从零开始。LobeHub 1.0 改变了这一基础。

在 LobeHub 1.0 中,最大的特性是支持了服务端数据库。在 0.x 时代,由于缺乏服务端持久化存储,许多用户迫切需要的功能实现困难,或完全无法实现,例如知识库、跨端同步、私有助手市场等等。
## 服务端数据库:数据持久化、随时随地访问

## 用户鉴权管理
1.0 的核心是服务端数据库支持。有了持久化存储,对话和助手不再局限于单个浏览器会话。从笔记本切换到台式机,上下文无缝衔接。持续积累知识,而非每次重启。

在 0.x 时代,和服务端数据库搭配的呼声最高的特性就是用户鉴权管理。在此之前,我们已经接入了 next-auth 和 clerk 作为鉴权解决方案。并针对多用户管理的诉求,将设置界面重构为了用户面板,在新的用户面板中整合了相关的用户信息。
这解锁了 0.x 时代无法实现的能力:

## LobeHub Cloud 开启 Beta 测试
- **知识库**:存储文档并在多轮对话中引用
- **跨设备同步**:在任何设备上精准续接上次工作
- **私有助手市场**:在团队内共享专用助手
- **对话历史**:搜索和回顾过往讨论

LobeHub Cloud 是我们基于 LobeHub 开源版的商业化版本,上述 1.0 的功能在 LobeHub Cloud 中均已上线,目前已开启 Beta 测试。如果你感兴趣,可以在这里加入我们的 waitlist , Beta 测试期间每天都会发放体验名额。
## 用户管理:从单人使用到团队协作

伴随数据库升级,1.0 引入了完善的用户认证与管理。我们同时集成 next-auth 和 Clerk 作为认证提供商,让你根据安全需求灵活选择。

设置界面已重构为独立的用户面板,将身份、偏好和访问控制统一整合。这是团队协作的基础设施。多用户现在可以在同一 LobeHub 实例中工作,拥有清晰的权限边界和账户管理。

用户面板支持:

- 在同一处管理账户设置和 API 密钥
- 配置认证提供商(next-auth 或 Clerk)
- 控制团队成员的工作空间访问权限
- 在个人和团队上下文间切换

## LobeHub Cloud:开箱即用的托管 1.0

LobeHub Cloud 是我们基于 1.0 架构打造的托管服务。上述全部能力 —— 服务端持久化、用户管理、知识库 —— 现已无需任何配置即可使用。

我们已开放 Beta 测试等待名单,每日发放有限体验名额。如果你想跳过自托管流程、立即使用 LobeHub 1.0,[可在此加入等待名单](https://lobehub.com)。

## 体验优化与修复

- 核心存储层从 localStorage 迁移至 PostgreSQL
- 新增数据库迁移系统,确保平滑升级
- 实现带安全令牌处理的会话管理
- 重构设置界面为独立用户面板
- 新增多认证提供商支持
- 通过服务端渲染优化首屏加载性能

## 致谢

衷心感谢所有参与 1.0 架构重构的贡献者。本次发布凝聚了整个 LobeHub 团队的基础建设工作,将为平台未来数年的发展提供坚实支撑。
@ -1,9 +1,8 @@

---
title: 'LobeHub Fully Enters the GPT-4 Era: GPT-4o Mini Officially Launched'
title: 'LobeHub v1.6: GPT-4o Mini Joins the Default Lineup'
description: >-
  LobeHub v1.6 has been released with support for GPT-4o mini, while LobeHub
  Cloud services have been fully upgraded to provide users with a more powerful
  AI conversation experience.
  LobeHub v1.6 adds GPT-4o mini support, while LobeHub Cloud upgrades its
  default model to GPT-4o mini for stronger out-of-the-box conversations.
tags:
  - LobeHub
  - GPT-4o Mini

@ -11,30 +10,38 @@ tags:

  - Cloud Service
---

# GPT-4o Mini Makes a Stunning Debut, Ushering in a New GPT-4 Era 🚀
# LobeHub v1.6: GPT-4o Mini Joins the Default Lineup

We are excited to announce that LobeHub v1.6 is now officially released! This update brings thrilling and significant upgrades:
OpenAI's full model family has moved to GPT-4. LobeHub v1.6 follows that shift, adding GPT-4o mini to the supported models. For LobeHub Cloud users, this upgrade goes further: GPT-4o mini is now the default, replacing GPT-3.5-turbo.

## 🌟 Major Updates
The result is stronger conversations from your first message, without any configuration changes.

- **GPT-4o Mini Officially Launched**: OpenAI's entire model lineup has been upgraded to GPT-4
- **LobeHub Cloud Service Upgrade**:
  - GPT-3.5-turbo has been upgraded to GPT-4o Mini as the default model
  - Providing users with a superior conversation experience
## GPT-4o Mini: Capable and Cost-Effective

## 🎯 Cloud Service Highlights
GPT-4o mini brings GPT-4-level intelligence at a smaller scale. It's fast enough for real-time interactions and capable enough for most everyday tasks—drafting, analysis, coding help, and creative work.

LobeHub Cloud offers you a convenient one-stop AI conversation service:
Use GPT-4o mini when you want:

- 📦 **Ready to Use**: Free registration for immediate experience
- 🤖 **Multi-Model Support**:
  - GPT-4o Mini
  - GPT-4o
  - Claude 3.5 Sonnet
  - Gemini 1.5 Pro
- Better reasoning than GPT-3.5 without the latency of full GPT-4o
- A cost-effective default for high-volume conversations
- Strong performance on instruction following and tool use

## 💡 Usage Recommendations
Switch to full GPT-4o or other providers (Claude 3.5 Sonnet, Gemini 1.5 Pro) when you need maximum capability for complex reasoning tasks.

If you encounter difficulties with private deployment, we highly recommend trying the LobeHub Cloud service. We provide comprehensive model support to help you easily embark on your AI conversation journey.
## Cloud Service: Upgraded Defaults

Come and experience the newly upgraded LobeHub v1.6, and feel the powerful conversational capabilities brought by GPT-4!
For LobeHub Cloud users, the service upgrade is automatic. New conversations start with GPT-4o mini by default. Existing users don't need to change any settings—the model switcher simply shows the new default first.

Cloud now supports:

- GPT-4o mini (default)
- GPT-4o
- Claude 3.5 Sonnet
- Gemini 1.5 Pro

## Improvements and fixes

- Added GPT-4o mini model configuration and parameter defaults
- Updated LobeHub Cloud default model selection logic
- Improved model switcher UI to highlight recommended options
- Fixed edge cases in streaming responses for newer OpenAI models
@ -1,38 +1,46 @@

---
title: LobeHub 全面进入 GPT-4 时代:GPT-4o mini 正式上线
title: LobeHub v1.6:GPT-4o mini 成为默认模型选项
description: >-
  LobeHub v1.6 重磅发布 GPT-4o mini 支持,同时 LobeHub Cloud 服务全面升级默认模型,为用户带来更强大的 AI
  对话体验。
  LobeHub v1.6 新增 GPT-4o mini 支持,同时 LobeHub Cloud 将默认模型升级为
  GPT-4o mini,让开箱即用的对话体验更进一步。
tags:
  - LobeHub
  - GPT-4o mini
  - AI 对话服务
---

# GPT-4o mini 震撼登场,开启全新 GPT-4 时代 🚀
# LobeHub v1.6:GPT-4o mini 成为默认模型选项

我们很高兴地宣布,LobeHub v1.6 现已正式发布!这次更新带来了激动人心的重大升级:
OpenAI 全系列模型已升级至 GPT-4 架构。LobeHub v1.6 跟进这一变化,正式支持 GPT-4o mini。对于 LobeHub Cloud 用户,这次升级更进一步:GPT-4o mini 现已成为默认模型,替代 GPT-3.5-turbo。

## 🌟 主要更新
这意味着从第一条消息开始,你就能获得更强的对话能力,无需任何配置调整。

- **GPT-4o mini 正式上线**:OpenAI 全系列模型实现 GPT-4 升级
- **LobeHub Cloud 服务升级**:
  - GPT-3.5-turbo 升级为 GPT-4o mini 作为默认模型
  - 为用户带来更优质的对话体验
## GPT-4o mini:能力与成本的平衡之选

## 🎯 Cloud 服务亮点
GPT-4o mini 以更小的规模提供 GPT-4 级别的智能。响应速度足以支撑实时交互,能力覆盖大多数日常任务 —— 起草文案、分析数据、编程辅助和创意工作。

LobeHub Cloud 为您提供便捷的一站式 AI 对话服务:
GPT-4o mini 的适用场景:

- 📦 **开箱即用**:免费注册,即刻体验
- 🤖 **多模型支持**:
  - GPT-4o mini
- 比 GPT-3.5 更强的推理能力,同时避免完整 GPT-4o 的延迟
- 高频对话的成本优化默认选项
- 指令遵循和工具调用方面的稳定表现

面对复杂推理任务时,可切换至完整 GPT-4o 或其他提供商(Claude 3.5 Sonnet、Gemini 1.5 Pro)获取最强能力。

## Cloud 服务:默认模型自动升级

LobeHub Cloud 用户将自动获得此次服务升级。新对话默认以 GPT-4o mini 启动。现有用户无需更改任何设置 —— 模型切换器会自动将新默认选项置顶显示。

Cloud 现已支持:

- GPT-4o mini(默认)
- GPT-4o
- Claude 3.5 Sonnet
- Gemini 1.5 Pro

## 💡 使用建议
## 体验优化与修复

如果您在私有化部署方面遇到困难,强烈推荐尝试 LobeHub Cloud 服务。我们提供全方位的模型支持,让您轻松开启 AI 对话之旅。

赶快来体验全新升级的 LobeHub v1.6,感受 GPT-4 带来的强大对话能力!
- 新增 GPT-4o mini 模型配置和参数默认值
- 更新 LobeHub Cloud 默认模型选择逻辑
- 优化模型切换器 UI,突出推荐选项
- 修复新版 OpenAI 模型流式响应的边界情况
@ -1,9 +1,9 @@
 ---
 title: LobeHub Database Docker Image Official Release
 description: >-
-  LobeHub v1.8.0 launches the official database Docker image, supporting cloud
-  data synchronization and user management, along with comprehensive
-  self-deployment documentation.
+  LobeHub v1.8.0 ships the official database Docker image, completing the
+  server-side deployment stack with cloud data sync, user management, and
+  comprehensive self-hosting documentation.
 tags:
   - LobeHub
   - Docker Image
@ -14,27 +14,34 @@ tags:
 # LobeHub Database Docker Image: The Final Piece of the Cloud Deployment Puzzle

-We are excited to announce the official release of the long-awaited database Docker image for LobeHub v1.8.0! This marks a significant milestone in our server database offerings, providing users with a complete cloud deployment solution.
+With LobeHub v1.8.0, the official database Docker image is now available. This completes the server-side deployment path we have been building, so teams can run a full cloud deployment without gaps in their setup flow.

-## 🚀 Core Features
+The image ships with Postgres and NextAuth pre-configured. You get cloud data synchronization and flexible authentication out of the box, including support for third-party SSO providers like Auth0.

-- **Lightweight Deployment**: The Docker image is only 90MB, yet offers full database functionality.
-- **Optimized Performance**: Pre-configured with Server Postgres and NextAuth authentication system to ensure optimal connectivity performance.
-- **Cloud Synchronization**: Enjoy a seamless cloud data synchronization experience right after deployment.
-- **Flexible Authentication**: Supports integration with third-party SSO service providers like Auth0.
+## Lightweight but complete

-## 📘 Upgraded Deployment Documentation
+The Docker image weighs only 90MB, yet provides a full database environment. We pre-configured Server Postgres and the NextAuth authentication system to ensure stable connectivity from the start. This keeps deployment simple while giving you production-grade foundations.

-To ensure users can complete the deployment smoothly, we have optimized the structure of our deployment documentation:
+After deployment, cloud data synchronization is available immediately. Your data flows between client and server without additional configuration steps.

-- Clear introduction to the framework concepts
+## Self-hosting documentation rebuilt

+To make private deployments smoother, we reorganized the deployment documentation with clearer structure:

+- Framework concepts explained up front
 - Detailed deployment case studies
-- Comprehensive self-deployment operation guide
+- Step-by-step self-hosting guide

-You can start deploying your own LobeHub service by visiting the [official documentation](https://lobehub.com/en/docs/self-hosting/server-database).
+Visit the [official documentation](https://lobehub.com/en/docs/self-hosting/server-database) to start deploying your own LobeHub service.

-## 🔮 Future Outlook
+## Improvements and fixes

-Our knowledge base feature is also in development, so stay tuned for more exciting updates!
+- Released official database Docker image (90MB)
+- Pre-configured Server Postgres and NextAuth authentication
+- Enabled cloud data synchronization post-deployment
+- Added Auth0 and third-party SSO provider integration
+- Reorganized self-hosting documentation with case studies

-This update marks a significant breakthrough for LobeHub in cloud deployment solutions, making private deployment easier than ever. We appreciate the community's patience, and we will continue to strive to provide users with a better experience.
+## Credits

+Thanks to the community for the patience while we built this part of the stack. We will keep improving the overall deployment experience.
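The server-database deployment this changelog entry describes can be sketched as a compose file. This is an illustration only: the image tags, service names, and environment variable names below are assumptions and should be checked against the official self-hosting documentation linked above.

```yaml
# Sketch only: image tags, port, and env names are assumptions;
# consult the official self-hosting docs for the canonical values.
services:
  postgres:
    image: pgvector/pgvector:pg16 # assumed: a Postgres image with pgvector
    environment:
      POSTGRES_PASSWORD: example-password
    volumes:
      - pg_data:/var/lib/postgresql/data

  lobe-chat-db:
    image: lobehub/lobe-chat-database # assumed tag for the database image
    ports:
      - '3210:3210'
    environment:
      DATABASE_URL: postgres://postgres:example-password@postgres:5432/postgres
      NEXT_AUTH_SECRET: replace-with-a-random-secret
      KEY_VAULTS_SECRET: replace-with-a-random-secret

volumes:
  pg_data:
```

Keeping Postgres in its own service leaves the small app image stateless; all durable state lives in the `pg_data` volume.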
@ -1,6 +1,6 @@
 ---
 title: LobeHub Database Docker 镜像正式发布
-description: LobeHub v1.8.0 推出官方数据库 Docker 镜像,支持云端数据同步与用户管理,并提供完整的自部署文档指南。
+description: LobeHub v1.8.0 推出官方数据库 Docker 镜像,补齐服务端部署链路,支持云端数据同步、用户管理,并提供完整的自部署文档。
 tags:
   - LobeHub
   - Docker 镜像
@ -10,27 +10,34 @@ tags:
 # LobeHub Database Docker 镜像:云端部署的最后一块拼图

-我们很高兴地宣布,LobeHub v1.8.0 正式发布了期待已久的数据库 Docker 镜像!这是我们在服务端数据库领域的重要里程碑,为用户提供了完整的云端部署解决方案。
+LobeHub v1.8.0 正式发布官方数据库 Docker 镜像。这补齐了我们持续完善的服务端部署链路,让团队可以更顺畅地完成完整的云端部署流程。

-## 🚀 核心特性
+镜像内置 Postgres 和 NextAuth 预配置。云端数据同步与灵活认证开箱即用,同时支持 Auth0 等第三方 SSO 服务商集成。

-- **轻量级部署**:Docker 镜像仅 90MB,却提供完整的数据库功能
-- **优化性能**:预置 Server Postgres 与 NextAuth 鉴权体系,确保最佳连通性能
-- **云端同步**:部署后即可享受流畅的云端数据同步体验
-- **灵活认证**:支持与 Auth0 等第三方 SSO 服务提供商集成
+## 轻量而完整

-## 📘 部署文档全新升级
+Docker 镜像仅 90MB,却提供了完整的数据库环境。我们预置了 Server Postgres 与 NextAuth 鉴权体系,从第一刻起就保证稳定连通。部署流程因此保持简洁,同时提供生产级基础能力。

-为确保用户能够顺利完成部署,我们优化了部署文档的结构:
+部署完成后,云端数据同步立即可用。客户端与服务端的数据流转无需额外配置。

-- 清晰的框架思路介绍
+## 自部署文档重构

+为了让私有部署更顺畅,我们重新梳理了部署文档结构:

+- 框架思路前置说明
 - 详细的部署案例指引
 - 完整的自部署操作指南

-现在,您可以通过访问 [官方文档](https://lobehub.com/zh/docs/self-hosting/server-database) 开始部署您自己的 LobeHub 服务。
+访问[官方文档](https://lobehub.com/zh/docs/self-hosting/server-database)开始部署你自己的 LobeHub 服务。

-## 🔮 未来展望
+## 改进与修复

-我们的知识库功能也正在开发中,敬请期待更多激动人心的更新!
+- 正式发布官方数据库 Docker 镜像(90MB)
+- 预置 Server Postgres 与 NextAuth 认证体系
+- 部署完成后即可启用云端数据同步
+- 支持 Auth0 及第三方 SSO 服务商集成
+- 重构自部署文档并补充案例说明

-这次更新标志着 LobeHub 在云端部署方案上的重要突破,让私有部署变得前所未有的简单。感谢社区的耐心等待,我们将继续努力为用户带来更好的体验。
+## 致谢

+感谢社区在这段建设周期中的耐心等待。我们会继续打磨整体部署体验。
@ -1,11 +1,11 @@
 ---
 title: >-
-  LobeHub Launches Knowledge Base Feature: A New Experience in Intelligent File
-  Management and Dialogue
+  LobeHub Launches Knowledge Base: Intelligent File Management and Dialogue
+  Experience
 description: >-
-  LobeHub introduces a brand new knowledge base feature that supports all types
-  of file management, intelligent vectorization, and file dialogue, making
-  knowledge management and information retrieval easier and smarter.
+  LobeHub introduces a brand new knowledge base feature that supports all file
+  types, intelligent vectorization, and file-based dialogue, making knowledge
+  management and information retrieval easier and smarter.
 tags:
   - LobeHub
   - Knowledge Base
@ -14,28 +14,30 @@ tags:
   - Cloud Version
 ---

-# Major Release of Knowledge Base Feature: A Revolution in Intelligent File Management and Dialogue
+# Knowledge Base: A New Way to Manage and Chat with Your Files

-We are excited to announce that the highly anticipated LobeHub knowledge base feature is now officially launched! 🎉 This feature is now available in both the open-source version and the Cloud version (LobeHub.com).
+The LobeHub knowledge base is now live in both the open-source edition and the Cloud edition (LobeHub.com). This release brings file management, intelligent processing, and file-based dialogue into one workflow, so teams can organize knowledge and retrieve information with less friction.

-## A Brand New File Management Experience
+## A dedicated space for your files

-- 📁 **Dedicated File Access**: A new "Files" primary menu has been added to the left sidebar, providing convenient access and management of files.
-- 📄 **Support for All File Types**: Upload and store various types of files, including documents, images, audio, and video.
-- 👀 **Powerful Preview Functionality**: Built-in support for online previews of multiple formats, including PDF, Excel, Word, PPT, and TXT.
-- 🔄 **Expandable Preview Architecture**: The preview component is built on an open-source solution, allowing for future expansion to support more file types.
+We added a **Files** entry in the left sidebar that gives uploaded content a permanent home. Instead of juggling files across scattered entry points, you now keep everything in one stable location.

-## Intelligent Knowledge Base Management
+You can upload and store documents, images, audio, and video. LobeHub provides built-in online preview for common formats including PDF, Excel, Word, PPT, and TXT. The preview component is built on open-source foundations, with an architecture designed to expand to more file formats over time.

-- 📚 **Unlimited Knowledge Bases**: Create an unlimited number of knowledge bases to meet different scenario needs.
-- 🔍 **Intelligent Vectorization**: Automatically chunk and vectorize files, supporting fragment preview functionality.
-- 💡 **Innovative Interaction**: Integrate the Portal interaction paradigm for quick preview and retrieval of file content.
-- 🔮 **Promising Future**: The architecture reserves space for expansion, with plans to support intelligent processing of multimedia files such as audio, images, and video.
+## Intelligent knowledge organization

-## Convenient User Experience
+You can create unlimited knowledge bases to match different scenarios, from personal reference collections to domain-focused team libraries.

-- 💪 **Ready to Use**: Supports direct file uploads in the dialogue box, making operations simple and intuitive.
-- 🎯 **Real-Time Feedback**: An optimized upload experience provides clear progress feedback.
-- ☁️ **Two Versions Available**: Offers both an open-source self-hosted version and an official Cloud version to meet different user needs.
+After upload, files are automatically chunked and vectorized, with fragment preview available for faster inspection. The feature also integrates the **Portal** interaction pattern to support quicker preview and retrieval of file content.

-All features are open-sourced on the [GitHub repository](https://github.com/lobehub/lobe-chat). We invite you to visit [LobeHub Cloud](http://LobeHub.com) to experience the full functionality.
+The current architecture reserves room for future expansion, including planned intelligent processing for multimedia files such as audio, images, and video.

+## Ready to use, wherever you deploy

+Knowledge base features work out of the box:

+- Upload files directly inside the dialogue box with an intuitive flow
+- Clear progress feedback during uploads
+- Available in both open-source self-hosted edition and official Cloud edition

+All features are open-sourced on the [GitHub repository](https://github.com/lobehub/lobe-chat). Visit [LobeHub Cloud](http://LobeHub.com) to experience the full functionality.
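The chunk-then-vectorize flow this knowledge base entry describes can be illustrated with a minimal sketch. Everything below is assumed for illustration (window sizes, overlap, and the stand-in `embed` function); it is not LobeHub's actual implementation.

```python
# Illustrative sketch of "chunk and vectorize" from the entry above.
# Sizes, overlap, and the embedding stub are assumptions, not LobeHub code.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks


def embed(chunk: str) -> list[float]:
    """Stand-in embedding; a real system calls an embedding model here."""
    return [len(chunk) / 1000.0]


# Each stored fragment keeps its text (for fragment preview) and its vector.
index = [(chunk, embed(chunk)) for chunk in chunk_text("some document " * 50)]
```

Overlapping windows keep sentences that straddle a chunk boundary retrievable from either side, which is why most retrieval pipelines use a nonzero overlap.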
@ -1,6 +1,6 @@
 ---
-title: LobeHub 重磅发布知识库功能:打造智能文件管理与对话新体验
-description: LobeHub 推出全新知识库功能,支持全类型文件管理、智能向量化和文件对话,让知识管理和信息检索更轻松、更智能。
+title: LobeHub 发布知识库功能:智能文件管理与对话体验
+description: LobeHub 推出全新知识库功能,支持全类型文件管理、智能向量化与文件对话,让知识整理与信息检索更轻松、更智能。
 tags:
   - LobeHub
   - 知识库
@ -8,28 +8,30 @@ tags:
   - 智能处理
 ---

-# 知识库功能重磅发布:智能文件管理与对话的革新
+# 知识库:管理与对话文件的新方式

-我们很高兴地宣布,备受期待的 LobeHub 知识库功能现已正式发布!🎉 该功能已同步在开源版和 Cloud 版(LobeHub.com)中上线。
+LobeHub 知识库功能现已正式上线,同步提供于开源版与 Cloud 版(LobeHub.com)。这次发布把文件管理、智能处理与文件对话整合到同一流程,让知识整理与信息检索更顺畅。

-## 全新的文件管理体验
+## 文件专属空间

-- 📁 **专属文件入口**:在左侧边栏新增「文件」一级菜单,提供便捷的文件访问与管理
-- 📄 **全类型文件支持**:支持文档、图片、音频、视频等各类文件的上传和存储
-- 👀 **强大的预览功能**:内置支持 PDF、Excel、Word、PPT 和 TXT 等多种格式的在线预览
-- 🔄 **可扩展的预览架构**:基于开源方案打造的预览组件,支持未来扩展更多文件类型
+左侧边栏新增 **「文件」** 一级入口,让上传的内容有了固定归宿。文件不再散落在各处入口,而是在统一位置稳定存放。

-## 智能知识库管理
+你可以上传并存储文档、图片、音频、视频等多种类型。LobeHub 为 PDF、Excel、Word、PPT、TXT 等常见格式提供内置在线预览。预览组件基于开源方案构建,架构上也为后续扩展更多文件类型留出了空间。

-- 📚 **无限知识库**:支持创建无限数量的知识库,满足不同场景需求
-- 🔍 **智能向量化**:自动进行文件分块和向量化处理,支持片段预览功能
-- 💡 **交互创新**:集成 Portal 交互范式,实现文件内容的快速预览和检索
-- 🔮 **未来可期**:架构预留扩展空间,计划支持音频、图片、视频等多媒体文件的智能处理
+## 智能知识组织

-## 便捷的使用体验
+你可以按场景创建无限数量的知识库,无论是个人资料沉淀还是团队主题整理,都能灵活组织。

-- 💪 **开箱即用**:支持在对话框直接上传文件,操作简单直观
-- 🎯 **实时反馈**:优化的上传体验,提供清晰的进度反馈
-- ☁️ **双版本可选**:提供开源自部署版本和官方 Cloud 版本,满足不同用户需求
+文件上传后会自动完成分块与向量化处理,并支持片段级预览,帮助你更快定位内容。功能同时集成了 **Portal** 交互范式,用于更高效地预览与检索文件信息。

+当前架构也预留了后续扩展能力,计划支持对音频、图片、视频等多媒体文件的智能处理。

+## 开箱即用,部署无忧

+知识库功能开箱即用:

+- 对话框内直接上传文件,流程直观
+- 上传过程中进度反馈清晰
+- 开源自部署版本与官方 Cloud 版本同步提供

 所有功能均已在 [GitHub 仓库](https://github.com/lobehub/lobe-chat) 开源,欢迎访问 [LobeHub Cloud](http://LobeHub.com) 体验完整功能。
@ -1,5 +1,5 @@
 ---
-title: LobeHub Perfectly Adapts to OpenAI O1 Series Models
+title: LobeHub Adds Support for OpenAI O1 Series Models
 description: >-
   LobeHub v1.17.0 now supports OpenAI's latest o1-preview and o1-mini models,
   bringing users enhanced coding and mathematical capabilities.
@ -11,27 +11,32 @@ tags:
   - Mathematical Problem Solving
 ---

-# OpenAI O1 Series Models Now Available on LobeHub
+# OpenAI O1 Series Models Now Available

-We are excited to announce that LobeHub v1.17.0 fully supports OpenAI's newly launched O1 series models. Whether you are a community edition user or a [Cloud version](https://LobeHub.com) subscriber, you can experience this significant update.
+LobeHub v1.17.0 now supports OpenAI's newly launched O1 series models. This update is available for both community edition users and [Cloud version](https://LobeHub.com) subscribers.

-## New Model Support
+The O1 series brings stronger performance in the areas users rely on most: code writing and comprehension, mathematical problem solving, and more precise task execution. You can start using these models immediately without waiting or extra configuration.

-- ✨ OpenAI o1-preview
-- ✨ OpenAI o1-mini
+## New model support

-## Enhanced Capabilities
+This release adds two models from the O1 family:

-The O1 series models excel in the following areas:
+- **OpenAI o1-preview** — Full reasoning capabilities for complex tasks
+- **OpenAI o1-mini** — Faster, cost-effective option for everyday work

-- 💻 Code writing and comprehension
-- 🔢 Mathematical problem solving
-- 🎯 More precise task execution
-- ⚡️ Optimized performance
+## Stronger where you need it

-## Experience It Now
+The O1 series excels at structured reasoning tasks. Use these models when you need:

-- 🌐 Cloud version subscribers can start using it immediately
-- 🔧 Self-hosted users can begin experiencing it by updating to v1.17.0
+- Code that compiles and runs correctly the first time
+- Step-by-step mathematical derivations with explanations
+- Precise execution of multi-step instructions

-This update marks an important step for LobeHub in supporting the latest AI models. We look forward to seeing how the O1 series models can help users unlock new possibilities!
+Both models are available now in the model selector for all LobeHub users.

+## How to start using

+- [Cloud version](https://LobeHub.com) subscribers can use O1 models immediately
+- Self-hosted users should update to v1.17.0 to access the new models

+With this release, LobeHub continues to track the latest model ecosystem. We look forward to seeing what you build with the O1 series.
@ -1,5 +1,5 @@
 ---
-title: LobeHub 完美适配 OpenAI O1 系列模型
+title: LobeHub 支持 OpenAI O1 系列模型
 description: LobeHub v1.17.0 现已支持 OpenAI 最新发布的 o1-preview 和 o1-mini 模型,为用户带来更强大的代码和数学能力。
 tags:
   - OpenAI O1
@ -9,27 +9,32 @@ tags:
   - 数学问题
 ---

-# OpenAI O1 系列模型现已登陆 LobeHub
+# OpenAI O1 系列模型现已可用

-我们很高兴地宣布,LobeHub v1.17.0 已完整支持 OpenAI 最新推出的 O1 系列模型。无论是社区版还是 [Cloud 版本](https://LobeHub.com)用户,都可以体验到这一重大更新。
+LobeHub v1.17.0 已完整支持 OpenAI 最新推出的 O1 系列模型。这次更新同时面向社区版用户与 [Cloud 版本](https://LobeHub.com) 订阅用户开放。

+O1 系列在用户最依赖的场景中表现更强:代码编写与理解、数学问题处理、更精准的任务执行。无需等待或额外配置,这些模型现在即可选用。

 ## 新增模型支持

-- ✨ OpenAI o1-preview
-- ✨ OpenAI o1-mini
+本次发布加入 O1 家族的两款模型:

-## 增强的能力
+- **OpenAI o1-preview** — 完整推理能力,适合复杂任务
+- **OpenAI o1-mini** — 更快、更经济,适合日常工作

-O1 系列模型在以下方面表现出色:
+## 在需要的地方更强

-- 💻 代码编写与理解
-- 🔢 数学问题处理
-- 🎯 更精准的任务执行
-- ⚡️ 优化的性能表现
+O1 系列擅长结构化推理任务。当你需要以下能力时选用这些模型:

-## 立即体验
+- 一次就能编译运行的正确代码
+- 带讲解的逐步数学推导
+- 多步骤指令的精准执行

-- 🌐 [Cloud 版本](https://LobeHub.com) 订阅用户现已可以直接使用
-- 🔧 自部署用户可通过更新至 v1.17.0 开始体验
+所有 LobeHub 用户现在都可以在模型选择器中找到这两款模型。

-这次更新让 LobeHub 在支持最新 AI 模型方面又迈出了重要一步。我们期待 O1 系列模型能够帮助用户实现更多可能!
+## 如何开始使用

+- [Cloud 版本](https://LobeHub.com) 订阅用户现已可以直接使用
+- 自部署用户更新至 v1.17.0 后即可访问新模型

+通过这次发布,LobeHub 继续紧跟最新模型生态。期待 O1 系列帮助你在实际工作中探索更多可能。
@ -1,5 +1,5 @@
 ---
-title: 'Major Update: LobeHub Enters the Era of Artifacts'
+title: 'LobeHub Enters the Era of Artifacts'
 description: >-
   LobeHub v1.19 brings significant updates, including full feature support for
   Claude Artifacts, a brand new discovery page design, and support for GitHub
@ -12,53 +12,51 @@ tags:
   - Interactive Experience
 ---

-# Major Update: LobeHub Enters the Era of Artifacts
+# LobeHub v1.19: Artifacts, Discovery, and More Models

-We are excited to announce the official release of LobeHub v1.19! This update introduces several important features that elevate the interactive experience of the AI assistant.
+LobeHub v1.19 is now live. This release brings a major step forward for day-to-day assistant usage, with stronger creation workflows, broader model access, and a redesigned way to discover what you can do next.

-## 🎨 Artifacts Support: Unlocking New Creative Dimensions
+## Create with Artifacts

-In this version, we have nearly fully replicated the core features of Claude Artifacts. Now, you can experience the following in LobeHub:
+LobeHub now replicates the core Claude Artifacts experience. You can generate and interact with rich content directly in the conversation:

-- SVG graphic generation and display
-- HTML page generation and real-time rendering
-- Document generation in more formats
+- **SVG graphics** — Generate and display vector graphics that render in real time
+- **HTML pages** — Create and preview interactive web pages without leaving the chat
+- **Rich documents** — Produce formatted documents in multiple output formats

-It is worth mentioning that the Python code execution feature has also been developed and will be available in future versions. At that time, users will be able to utilize both Claude Artifacts and OpenAI Code Interpreter, significantly enhancing the practicality of the AI assistant.
+Python code execution is also in development and planned for a future release. Once available, you will be able to combine Claude Artifacts with OpenAI Code Interpreter, further extending what you can build with your assistant.

 

-## 🔍 New Discovery Page: Explore More Possibilities
+## Redesigned discovery page

-The discovery page has undergone a major upgrade, now featuring a richer variety of content categories:
+The discovery page has been rebuilt with richer content categories to help you explore AI capabilities more naturally:

-- AI Assistant Marketplace
-- Plugin Showcase
-- Model List
-- Provider Introductions

-This redesign not only increases the information density of the page but also opens a new window for users to explore AI capabilities. In the future, we plan to further expand the functionality of the discovery page, potentially adding:
+The new layout raises information density while making capabilities easier to browse. We plan to expand this page further with knowledge base sharing, Artifacts showcases, and curated conversation sharing.

-- Knowledge Base Sharing
-- Artifacts Showcases
-- Curated Conversation Sharing
+## GitHub Models provider support

-## 🚀 GitHub Models Support: More Model Choices

-Thanks to community member [@CloudPassenger](https://github.com/CloudPassenger) for their contributions, LobeHub now supports GitHub Models providers. Users simply need to:
+Thanks to community contributor [@CloudPassenger](https://github.com/CloudPassenger), LobeHub now supports GitHub Models as a provider. To start using it:

 1. Prepare a GitHub Personal Access Token (PAT)
-2. Configure provider information in the settings
+2. Configure provider information in settings
 3. Start using free models available on GitHub Models

-The addition of this feature greatly expands the range of models available to users, providing more options for AI conversations in different scenarios.
+This significantly expands the available model pool and gives you more choices for different conversation scenarios.

-## 🔜 Future Outlook
+## Improvements and fixes

-We will continue to focus on enhancing the functionality and user experience of LobeHub. In upcoming versions, we plan to:
+- Added full Artifacts support for SVG, HTML, and document generation
+- Redesigned discovery page with marketplace, plugins, models, and providers
+- Added GitHub Models provider support
+- Preparing Python code execution feature for upcoming release
+- Planned expansion of discovery page with knowledge base and Artifacts sharing

-- Improve the Python code execution feature
-- Add support for more types of Artifacts
-- Expand the content dimensions of the discovery page
+## Credits

-Thank you to every user for your support and feedback. Let’s look forward to more surprises from LobeHub together!
+Huge thanks to community contributor [@CloudPassenger](https://github.com/CloudPassenger) for the GitHub Models integration.
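The three setup steps in the entry above amount to passing a PAT to your deployment. A hedged sketch for a self-hosted instance follows; the variable name is an assumption to verify against the provider settings documentation, and the token value is a placeholder:

```
# .env sketch — variable name assumed, token value is a placeholder
GITHUB_TOKEN=<your-github-personal-access-token>
```

Cloud users can skip this entirely and paste the PAT into the provider settings UI instead.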
@ -1,5 +1,5 @@
 ---
-title: 重磅更新:LobeHub 迎来 Artifacts 时代
+title: '重磅更新:LobeHub 迎来 Artifacts 时代'
 description: >-
   LobeHub v1.19 带来了重大更新,包括 Claude Artifacts 完整特性支持、全新的发现页面设计,以及 GitHub Models
   服务商支持,让 AI 助手的能力得到显著提升。
@ -11,53 +11,51 @@ tags:
   - GitHub Models
 ---

-# 重磅更新:LobeHub 迎来 Artifacts 时代
+# LobeHub v1.19:Artifacts、发现页与更多模型

-我们很高兴地宣布 LobeHub v1.19 版本正式发布!这次更新带来了多项重要功能,让 AI 助手的交互体验更上一层楼。
+LobeHub v1.19 正式发布。这次更新围绕日常使用体验带来了一次关键升级:创作流程更完整、模型选择更丰富、能力发现路径也更清晰。

-## 🎨 Artifacts 支持:解锁全新创作维度
+## 用 Artifacts 创作

-在这个版本中,我们几乎完整还原了 Claude Artifacts 的核心特性。现在,您可以在 LobeHub 中体验到:
+LobeHub 现已还原 Claude Artifacts 的核心体验。你可以在对话中直接生成并交互丰富的内容:

-- SVG 图形生成与展示
-- HTML 页面生成与实时渲染
-- 更多格式的文档生成
+- **SVG 图形** — 生成并展示实时渲染的矢量图形
+- **HTML 页面** — 创建并预览交互式网页,无需离开对话
+- **富文档** — 以多种输出格式生成格式化文档

-值得一提的是,Python 代码执行功能也已完成开发,将在后续版本中与大家见面。届时,用户将能够同时运用 Claude Artifacts 和 OpenAI Code Interpreter 这两大强大工具,极大提升 AI 助手的实用性。
+Python 代码执行功能也已完成开发,计划在后续版本上线。届时,你将能够把 Claude Artifacts 与 OpenAI Code Interpreter 结合使用,进一步扩展助手的构建能力。

 

-## 🔍 全新发现页面:探索更多可能
+## 发现页重新设计

-发现页面迎来了重大升级,现在包含更丰富的内容类别:
+发现页面已完成大幅重构,以更丰富的内容分类帮助你更自然地探索 AI 能力:

-- AI 助手市场
-- 插件展示
-- 模型列表
-- 服务商介绍

-这次改版不仅提升了页面的信息密度,更为用户打开了探索 AI 能力的新窗口。未来,我们计划进一步扩展发现页面的功能,可能会加入:
+新版布局在提升信息密度的同时,也让能力浏览更直观。我们计划继续扩展发现页面,加入知识库分享、Artifacts 展示、精选对话分享等功能。

-- 知识库分享
-- Artifacts 展示
-- 精选对话分享
+## GitHub Models 服务商支持

-## 🚀 GitHub Models 支持:更多模型选择

-感谢社区成员 [@CloudPassenger](https://github.com/CloudPassenger) 的贡献,现在 LobeHub 已经支持 GitHub Models 服务商。用户只需:
+感谢社区成员 [@CloudPassenger](https://github.com/CloudPassenger) 的贡献,LobeHub 现已支持 GitHub Models 作为服务商。使用流程如下:

 1. 准备 GitHub Personal Access Token (PAT)
 2. 在设置中配置服务商信息
-3. 即可开始使用 GitHub Models 上的免费模型
+3. 开始使用 GitHub Models 上的免费模型

-这一功能的加入大大扩展了用户可选用的模型范围,为不同场景下的 AI 对话提供了更多选择。
+这项能力显著扩展了可选模型范围,也为不同场景下的 AI 对话提供了更灵活的选择。

-## 🔜 未来展望
+## 改进与修复

-我们将持续致力于提升 LobeHub 的功能和用户体验。接下来的版本中,我们计划:
+- 完整支持 Artifacts:SVG、HTML 和文档生成
+- 重新设计发现页,整合市场、插件、模型和服务商
+- 新增 GitHub Models 服务商支持
+- Python 代码执行功能开发完成,待后续版本上线
+- 计划扩展发现页,支持知识库与 Artifacts 分享

-- 完善 Python 代码执行功能
-- 增加更多 Artifacts 类型支持
-- 扩展发现页面的内容维度
+## 致谢

-感谢每一位用户的支持与反馈,让我们一起期待 LobeHub 带来更多惊喜!
+特别感谢社区贡献者 [@CloudPassenger](https://github.com/CloudPassenger) 提供的 GitHub Models 集成。
@ -1,38 +1,41 @@
 ---
-title: LobeHub Introduces Persistent Assistant Sidebar Feature
+title: LobeHub Adds a Persistent Agent Sidebar
 description: >-
-  LobeHub v1.26.0 launches the persistent assistant sidebar feature, supporting
-  quick key switching for easy access to frequently used assistants,
-  significantly enhancing efficiency.
+  LobeHub v1.26.0 introduces a persistent Agent sidebar with shortcut-based
+  switching, synced pinned Agents, and a more focused chat layout.
 tags:
-  - Persistent Assistant
-  - Sidebar Feature
-  - User Experience
-  - Workflow Optimization
+  - Persistent Agent Sidebar
+  - Agent Switching
+  - Chat Layout
+  - Workflow Efficiency
 ---

-# Persistent Assistant Sidebar: Creating a More Convenient Conversation Experience
+# Persistent Agent Sidebar: Faster Access, Less Context Switching

-In version v1.26.0, we are excited to introduce a long-awaited new feature — the persistent assistant sidebar. This feature aims to enhance user access to frequently used assistants, making your reliable helpers easily accessible.
+In v1.26.0, LobeHub adds a persistent Agent sidebar to make high-frequency Agents easier to reach during active conversations. Instead of jumping between lists, you can keep your key Agents visible and switch with less friction.

-## Feature Highlights
+## Tighter conversation workflow

-- **Quick Switching**: Supports quick switching between different assistants using keyboard shortcuts, making your workflow smoother.
-- **Space Optimization**: Activating the sidebar automatically hides the conversation list, providing you with a larger conversation area.
-- **Intelligent Display**: Automatically syncs pinned assistants to the sidebar, ensuring that important assistants are always within view.
+This update focuses on reducing small interaction costs in everyday chat:

+- **Quick switching** — Use keyboard shortcuts to move between Agents faster
+- **More chat space** — When the sidebar is active, the conversation list hides to free up room for messages
+- **Synced pinned Agents** — Your pinned Agents automatically appear in the sidebar so priority items stay in view

 

 

-## How to Use
+## How to enable

-Currently, this feature is in the experimental stage and is disabled by default. To experience it, you can enable it by adding the environment variable `FEATURE_FLAGS=+pin_list`.
+This capability is currently experimental and disabled by default. To try it in self-hosted setups, enable it with:

-We have already enabled this feature in the Cloud version, and we welcome all users to try it out and provide feedback. You can share your experiences in [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions/4515) to help us refine this feature further.
+```
+FEATURE_FLAGS=+pin_list
+```

-## Design Philosophy
+The Cloud version already has this enabled. If you try it, share feedback in [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions/4515) so we can keep improving the experience.

-The core goal of this update is to optimize work efficiency. By effectively utilizing the sidebar space, we make frequently used assistants easily accessible while hiding the conversation list to expand the conversation area, providing users with a more focused dialogue experience.
+## Why we built it

-We hope this new feature will significantly enhance your user experience. Welcome to upgrade to version v1.26.0 and start experiencing it!
+The goal of this release is straightforward: improve focus and reduce small interaction costs in everyday chat. By keeping frequently used Agents close at hand and reclaiming screen space for messages, the interface supports longer, more concentrated conversations.
@ -1,34 +1,39 @@
 ---
-title: LobeHub 新增助手常驻侧边栏功能
-description: LobeHub v1.26.0 推出助手常驻侧边栏功能,支持快捷键切换,让高频使用的助手触手可及,大幅提升使用效率。
+title: LobeHub 新增助理常驻侧边栏
+description: LobeHub v1.26.0 带来助理常驻侧边栏,支持快捷键切换、自动同步置顶助理,并提供更专注的对话布局。
 tags:
-  - 助手常驻侧边栏
-  - 对话体验
+  - 助理常驻侧边栏
+  - 助理切换
+  - 对话布局
   - 工作效率
 ---

-# 助手常驻侧边栏:打造更便捷的对话体验
+# 助理常驻侧边栏:更快触达,减少切换成本

-我们在 v1.26.0 版本中推出了一项期待已久的新功能 —— 助手常驻侧边栏。这项功能旨在提升用户对高频助手的访问体验,让您的得力助手触手可及。
+在 v1.26.0 中,LobeHub 新增了助理常驻侧边栏,让高频使用的助理在对话过程中更容易触达。你不需要在多个列表间来回跳转,就能更顺畅地切换到常用助理。

-## 功能亮点
+## 更紧凑的对话流程

-- **快捷切换**:支持通过快捷键快速切换不同助手,让工作流更加流畅
-- **空间优化**:激活侧边栏时会自动隐藏会话列表,为您腾出更大的对话空间
-- **智能显示**:将置顶助手自动同步到侧边栏,让重要助手始终在视线范围内
+这次更新主要围绕减少日常对话中的细碎操作:

+- **快捷切换** — 支持通过快捷键在不同助理间快速切换
+- **更多对话空间** — 启用侧边栏后,会话列表自动隐藏,为聊天区域腾出更多空间
+- **置顶自动同步** — 已置顶的助理自动出现在侧边栏,重点对象始终可见

 

 

-## 如何使用
+## 如何开启

-目前这项功能处于实验阶段,默认未开启。如需体验,您可以通过添加环境变量 `FEATURE_FLAGS=+pin_list` 来启用。
+这项能力目前仍处于实验阶段,默认关闭。若你在自托管环境中想提前体验,可通过以下环境变量开启:

-我们已在 Cloud 版本中同步开启此功能,欢迎所有用户体验并提供反馈。您可以在 [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions/4515) 中分享使用感受,帮助我们将这个功能打磨得更加完善。
+```
+FEATURE_FLAGS=+pin_list
+```

-## 设计理念
+Cloud 版本已默认开启。欢迎在 [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions/4515) 分享你的使用反馈,帮助我们继续完善这个体验。

-这次更新的核心目标是优化工作效率。通过合理利用侧边栏空间,我们让高频使用的助手触手可及,同时通过隐藏会话列表来扩大对话区域,为用户带来更专注的对话体验。
+## 我们为什么做这个改动

-我们期待这项新功能能够显著提升您的使用体验。欢迎升级到 v1.26.0 版本开始体验!
+这个版本的目标很明确:提升专注度,减少日常对话中的细碎操作成本。把常用助理放在手边,并把更多界面空间留给消息内容后,连续对话会更稳定、更沉浸。
@@ -1,28 +1,35 @@
 ---
-title: LobeHub Supports Sharing Conversations in Text Format (Markdown/JSON)
+title: Export Conversations as Markdown or OpenAI JSON
 description: >-
-  LobeHub v1.28.0 introduces support for exporting conversations in Markdown and
-  OpenAI format JSON, making it easy to convert conversation content into note
-  materials, development debugging data, and training corpora, significantly
-  enhancing the reusability of conversation content.
+  LobeHub v1.28.0 adds Markdown and OpenAI-format JSON exports, making it
+  easier to turn conversations into documentation, debugging payloads, or
+  training datasets.
 tags:
-  - Text Format Export
+  - Markdown Export
+  - OpenAI JSON
 ---

-# Upgraded Conversation Sharing: Support for Text Format Export
+# Export Conversations as Markdown or OpenAI JSON

-In the latest version v1.28.0, we have launched the text format export feature for conversation content, now supporting exports in both Markdown and OpenAI format JSON.
+Version 1.28.0 makes conversation exports more useful. You can now download chats as **Markdown** for documentation or **OpenAI-format JSON** for debugging and training. No more copy-pasting or reformatting by hand.

-The Markdown export feature meets users' needs for directly using conversation content in note-taking and document writing. You can easily save valuable conversation content and manage it across various note-taking applications for reuse.
+## Better documentation workflows
+
+The Markdown export turns conversations into clean, readable documents. Writers can capture useful exchanges and move them directly into notes, wikis, or reports without manual cleanup.

 

-Additionally, we support exporting conversations in JSON format that complies with OpenAI messages specifications. This format can be used directly for API debugging and serves as high-quality training data for models.
+## API-ready JSON for developers
+
+The JSON export follows the OpenAI messages format. This means you can drop exported conversations directly into API debugging tools or use them as structured training data for fine-tuning.
+
+Tool calling data is preserved in its original structure, which helps when analyzing or improving how agents use external tools.

 

-It is particularly noteworthy that we retain the original data of Tools Calling within the conversation, which is crucial for enhancing the model's tool invocation capabilities.
+## Improvements and fixes

-This update greatly expands the sharing and application scenarios for conversation content, and we hope these new features will enhance your user experience.
+- Added Markdown export option for conversation sharing
+- Added OpenAI-format JSON export with full message structure
+- Tool calling data preserved in original format during export
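As an editorial aside on the hunk above: the "OpenAI messages format" it refers to is the standard chat schema. A minimal sketch of what an exported conversation with a preserved tool call might look like follows; the field names come from the OpenAI chat schema, but the tool name, call id, and overall sample are hypothetical, and LobeHub's actual export envelope may differ.

```python
import json

# A minimal conversation in OpenAI messages format, including a preserved
# tool call. The tool name and call id below are hypothetical examples.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",  # hypothetical call id
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool name
                    "arguments": json.dumps({"city": "Paris"}),
                },
            }
        ],
    },
    # The tool result is linked back to the call via tool_call_id.
    {"role": "tool", "tool_call_id": "call_1", "content": "18°C, clear"},
    {"role": "assistant", "content": "It is 18°C and clear in Paris."},
]

exported = json.dumps(messages, ensure_ascii=False, indent=2)
```

Because the tool call survives round-tripping through `json.dumps`/`json.loads`, such an export can be replayed in API debugging tools or filtered into a fine-tuning dataset.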
@@ -1,26 +1,34 @@
 ---
-title: LobeHub 支持分享对话为文本格式(Markdown/JSON)
+title: 支持导出对话为 Markdown 或 OpenAI JSON 格式
 description: >-
-  LobeHub v1.28.0 新增 Markdown 和 OpenAI 格式 JSON
-  导出支持,让对话内容能轻松转化为笔记素材、开发调试数据和训练语料,显著提升对话内容的复用价值。
+  LobeHub v1.28.0 新增 Markdown 与 OpenAI 格式 JSON 导出,方便将对话转为文档、
+  调试数据或训练语料。
 tags:
-  - 对话内容
-  - Markdown导出
+  - 文本格式导出
+  - Markdown 导出
+  - OpenAI JSON
 ---

-# 对话内容分享升级:支持文本格式导出
+# 支持导出对话为 Markdown 或 OpenAI JSON 格式

-我们在最新版本 v1.28.0 中推出了对话内容的文本格式导出功能,现在支持将对话内容导出为 Markdown 和 OpenAI 格式的 JSON 两种格式。
+v1.28.0 让对话导出更加实用。现在可以将聊天内容下载为 **Markdown** 用于文档整理,或 **OpenAI 格式 JSON** 用于调试与训练,无需手动复制粘贴。

-Markdown 格式导出功能满足了用户将对话内容直接用于笔记和文档撰写的需求。您可以轻松地将有价值的对话内容保存下来,并在各类笔记软件中进行管理和复用。
+## 更顺畅的文档工作流
+
+Markdown 导出将对话转为整洁可读的文档。写作者可以快速收录有价值的交流,直接导入笔记、知识库或报告,省去手动排版。

 

-同时,我们还支持将对话导出为符合 OpenAI messages 规范的 JSON 格式。这种格式不仅可以直接用于 API 调试,还能作为高质量的模型训练语料。
+## 开发者友好的 API 格式
+
+JSON 导出遵循 OpenAI messages 规范。导出的对话可直接用于 API 调试工具,或作为结构化训练数据用于模型微调。
+
+Tool Calling 数据保持原始结构,便于分析和改进 Agent 调用外部工具的行为。

 

-特别值得一提的是,我们会完整保留对话中的 Tools Calling 原始数据,这对提升模型的工具调用能力具有重要价值。
+## 体验优化

-这次更新让对话内容的分享和应用场景得到了极大扩展,期待这些新功能能够提升您的使用体验。
+- 新增 Markdown 导出选项
+- 新增 OpenAI 格式 JSON 导出,保留完整消息结构
+- 导出时保留 Tool Calling 原始数据
@@ -1,8 +1,8 @@
 ---
-title: New Model Providers Added to LobeHub in November
+title: November Update - Four New Model Providers
 description: >-
-  LobeHub model providers now support Gitee AI, InternLM (ShuSheng PuYu), xAI,
-  and Cloudflare WorkersAI
+  LobeHub now supports Gitee AI, InternLM, xAI, and Cloudflare Workers AI,
+  giving teams more options when choosing where to run models.
 tags:
   - LobeHub
   - AI Model Providers
@@ -12,15 +12,27 @@ tags:
   - Cloudflare Workers AI
 ---

-# New Model Providers Added to LobeHub in November 🎉
+# November Update - Four New Model Providers

-We're excited to announce that LobeHub has expanded its AI model support with the following providers:
+This month's provider expansion adds four new options for running models. Teams can now choose from a wider range of hosting environments and compare outputs across different ecosystems without switching platforms.

-- **Gitee AI**: [https://ai.gitee.com](https://ai.gitee.com)
-- **InternLM**: [https://internlm.intern-ai.org.cn](https://internlm.intern-ai.org.cn)
-- **xAI**: [https://x.ai](https://x.ai)
-- **Cloudflare Workers AI**: [https://developers.cloudflare.com/workers-ai](https://developers.cloudflare.com/workers-ai)
+## New providers available

-## Need More Model Providers?
+The following providers are now supported:

-Feel free to submit your requests at [More Model Provider Support](https://github.com/lobehub/lobe-chat/discussions/6157).
+- **[Gitee AI](https://ai.gitee.com)** - Model hosting with Chinese developer ecosystem integration
+- **[InternLM](https://internlm.intern-ai.org.cn)** - Open-source large language models from Shanghai AI Laboratory
+- **[xAI](https://x.ai)** - Grok and other models from xAI
+- **[Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai)** - Edge-deployed models running on Cloudflare's global network
+
+## What this enables
+
+Use these additions to:
+
+- Route requests to providers based on latency, cost, or compliance requirements
+- Compare model behavior across different hosting environments
+- Deploy closer to users with edge-hosted options
+
+## Feedback
+
+Need additional providers? Tell us which ones to prioritize in [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions/6157).
@@ -1,24 +1,38 @@
 ---
-title: LobeHub 11 月新增模型服务
-description: 'LobeHub 模型服务新增支持 Gitee AI, InternLM (书生浦语), xAI, Cloudflare WorkersAI'
+title: 11 月更新 - 新增 4 家模型服务商
+description: >-
+  LobeHub 新增支持 Gitee AI、InternLM、xAI 和 Cloudflare Workers AI,
+  为团队提供更多模型接入选择。
 tags:
   - LobeHub
-  - AI模型服务
+  - AI 模型服务
   - Gitee AI
   - InternLM
   - xAI
   - Cloudflare Workers AI
 ---

-# LobeHub 11 月新增模型服务支持 🎉
+# 11 月更新 - 新增 4 家模型服务商

-我们很高兴地宣布,LobeHub 在 11 月份新增了以下 AI 模型服务的支持:
+本月的服务商扩展新增了 4 个模型运行选项。团队现在可以从更广泛的托管环境中选择,无需切换平台即可对比不同生态的模型输出效果。

-- **Gitee AI**: [https://ai.gitee.com](https://ai.gitee.com)
-- **InternLM (书生浦语)**: [https://internlm.intern-ai.org.cn](https://internlm.intern-ai.org.cn)
-- **xAI**: [https://x.ai](https://x.ai)
-- **Cloudflare Workers AI**: [https://developers.cloudflare.com/workers-ai](https://developers.cloudflare.com/workers-ai)
+## 新增服务商

-## 需要更多模型服务?
+本次支持的服务商:

-欢迎在 [更多模型服务商支持](https://github.com/lobehub/lobe-chat/discussions/6157) 提交您的需求。
+- **[Gitee AI](https://ai.gitee.com)** - 面向中国开发者生态的模型托管服务
+- **[InternLM](https://internlm.intern-ai.org.cn)** - 上海人工智能实验室开源大语言模型
+- **[xAI](https://x.ai)** - xAI 提供的 Grok 及其他模型
+- **[Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai)** - 运行在 Cloudflare 全球网络上的边缘部署模型
+
+## 使用场景
+
+利用这些新增选项可以:
+
+- 按延迟、成本或合规要求选择服务商
+- 对比不同托管环境下的模型表现
+- 通过边缘托管选项就近部署
+
+## 反馈
+
+还需要其他服务商支持?在 [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions/6157) 告诉我们优先支持哪些。
@@ -1,30 +1,29 @@
 ---
-title: LobeHub Supports Branching Conversations
+title: Branch Conversations from Any Message
 description: >-
-  LobeHub now allows you to create new conversation branches from any message,
-  freeing your thoughts.
+  Create a new conversation branch from any message and choose whether to
+  continue with context or start fresh.
 tags:
   - Branching Conversations
   - LobeHub
   - Chat Features
 ---

-# Exciting Launch of Branching Conversations Feature 🎉
+# Branch Conversations from Any Message

-We are thrilled to announce that LobeHub has introduced a brand new branching conversations feature, making your conversation experience smoother and more natural:
+Conversations rarely stay linear. You follow a tangent, want to test a different approach, or need to split the thread for different audiences. Now you can create a branch from any message and explore alternative paths without losing the original flow.

-## Key Features
+## Two ways to branch

-- **Message Branching**: Create new conversation branches from any message
-- **Dual Mode Switching**:
-  - Continuation Mode: Maintain the original context to continue the discussion
-  - Standalone Mode: Start a completely new topic based on the selected message
+When you create a branch, choose how to handle context:

-## How to Use
+- **Continuation mode**: Keep the full conversation history and continue from that point. Useful when you want to explore "what if" scenarios while preserving context.
+- **Standalone mode**: Start fresh from the selected message. Good for extracting a clean sub-conversation or preparing a focused excerpt to share.

-1. Click the "Create Branch" button on the right side of any message
-2. Start a new conversation branch
+## How it works

-## Feedback and Suggestions
+Click the branch button on any message to start a new thread. The original conversation stays intact while you explore the new direction. Switch between branches from the topic sidebar to compare outcomes or continue whichever path proves most useful.

-If you have any suggestions or thoughts about the branching conversations feature, feel free to share your feedback with us in the [Feature Feedback](https://github.com/lobehub/lobe-chat/discussions).
+## Feedback
+
+Have ideas for improving conversation branching? Share them in [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions).
@@ -1,6 +1,6 @@
 ---
-title: LobeHub 支持分支对话
-description: LobeHub 现已支持从任意消息创建新的对话分支,让您的思维不再受限
+title: 从任意消息创建对话分支
+description: 支持从任意消息创建对话分支,可选择延续上下文或开启全新话题
 tags:
   - LobeHub
   - 分支对话
@@ -8,22 +8,21 @@ tags:
   - 用户体验
 ---

-# 重磅推出分支对话功能 🎉
+# 从任意消息创建对话分支

-我们很高兴地宣布,LobeHub 推出了全新的分支对话功能,让您的对话体验更加流畅自然:
+对话很少保持线性。你可能会顺着某个话题深入,想尝试不同思路,或需要将线程拆分给不同对象。现在可以从任意消息创建分支,在探索替代路径的同时不丢失原对话的完整性。

-## 核心特性
+## 两种分支方式

-- **消息分支**: 支持在任意消息处创建新的对话分支
-- **双模式切换**:
-  - 延续模式 (Continuation): 保持原有上下文继续探讨
-  - 独立模式 (Standalone): 基于选定消息开启全新话题
+创建分支时可选择如何处理上下文:
+
+- **延续模式**:保留完整对话历史,从该点继续。适合在保留上下文的同时探索「如果这样做会怎样」的场景。
+- **独立模式**:从选定消息开始全新的对话。适合提取干净的子对话,或准备专注的节选用于分享。

 ## 使用方式

-1. 在任意消息右侧点击「创建分支」按钮
-2. 开始新的对话分支
+点击任意消息上的分支按钮即可开启新线程。原对话保持完整,你可以在探索新方向时随时返回。通过话题侧边栏在不同分支间切换,对比结果或继续最有价值的路径。

-## 反馈建议
+## 反馈

-如果您对分支对话功能有任何建议或想法,欢迎在 [功能反馈](https://github.com/lobehub/lobe-chat/discussions) 中与我们交流。
+对分支对话有改进建议?欢迎在 [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions) 分享。
@@ -1,8 +1,8 @@
 ---
-title: LobeHub Supports User Data Statistics and Activity Sharing
+title: Personal Statistics and Activity Sharing
 description: >-
-  LobeHub now supports multi-dimensional user data statistics and activity
-  sharing
+  Review your AI usage patterns with multi-dimensional statistics and generate
+  shareable activity cards.
 tags:
   - LobeHub
   - User Statistics
@@ -10,23 +10,24 @@ tags:
   - AI Data
 ---

-# User Data Statistics and Activity Sharing 💯
+# Personal Statistics and Activity Sharing

-Want to know about your activity performance on LobeHub?
+Want to see how you have been using LobeHub? The new statistics page gives you a clear view of your activity patterns, from agent usage to message volume. You can also generate a shareable activity card to post or send to friends.

-Now, you can comprehensively understand your AI data through the statistics feature, and even generate personal activity sharing images to share your LobeHub activity with friends.
+## What you can track

-## 📊 Data Statistics
+The statistics dashboard shows:

-- **Statistics**: Number of Assistants / Topics / Messages / Total Word Count
-- **Rankings**:
-  - Model Usage Rate `Top 10`
-  - Assistant Usage Rate `Top 10`
-  - Topic Content Volume `Top 10`
-- **Heat Map**: Activity distribution over the past year
-- **User Activity Sharing**: Generate personal activity sharing images
+- **Overview counts**: Agents created, topics started, messages sent, and total words generated
+- **Top lists**: Your most-used models, favorite agents, and longest topics
+- **Activity heatmap**: A visual timeline of your usage over the past year

-## 👉 How to Use
+Use this to understand your own patterns, identify which agents you rely on most, or simply see how your usage has grown over time.

-1. Requires `PgLite` or `Database` mode
-2. Click on your profile picture to enter "Account" - "Data Statistics" page
+## Share your activity
+
+Generate a personal activity image that summarizes your stats in a clean, shareable format. Post it to social media or share with teammates to compare AI workflows.
+
+## Getting started
+
+Statistics are available in **PgLite** and **Database** deployment modes. Open your account menu, go to **Data Statistics**, and your dashboard will load automatically.
@@ -1,29 +1,30 @@
 ---
-title: LobeHub 支持用户数据统计与活跃度分享
-description: LobeHub 现已支持多维度用户数据统计与活跃度分享
+title: 个人数据统计与活跃度分享
+description: 多维度统计 AI 使用情况,生成可分享的活跃度卡片
 tags:
   - 用户数据统计
   - 活跃度分享
   - LobeHub
 ---

-# 用户数据统计与活跃度分享 💯
+# 个人数据统计与活跃度分享

-想要了解自己在 LobeHub 上的活跃度表现吗?
+想了解自己在 LobeHub 上的使用情况?全新的统计页面让你清楚掌握使用模式,从 Agent 使用到消息量一目了然。还可以生成可分享的活跃度卡片,发送给朋友或发布到社交媒体。

-现在,您可以通过数据统计功能,全方位了解自己的 AI 数据,还可以生成个人活跃度分享图片,与好友分享您在 LobeHub 上的活跃度。
+## 可追踪的数据

-## 📊 数据统计
+统计面板展示:

-- **数据统计**: 助手数 / 话题数 / 消息数 / 累计字数
-- **排行版**:
-  - 模型使用率 `Top 10`
-  - 助手使用率 `Top 10`
-  - 话题内容量 `Top 10`
-- **热力图**: 过去一年内的活跃度分布
-- **用户活跃度分享**: 生成个人活跃度分享图片
+- **概览数据**:创建的助理数、发起的话题数、发送的消息数、累计生成字数
+- **排行榜**:你最常用的模型、偏好的助理、内容最多的话题
+- **活跃度热力图**:过去一年使用情况的视觉时间线

-## 👉 使用方式
+利用这些数据了解自己的使用模式,识别最常用的助理,或观察使用习惯随时间的变化。

-1. 需要使用 `PgLite` 或 `数据库` 模式
-2. 点击个人头像进入「账户管理」-「数据统计」页面
+## 分享你的活跃度
+
+生成个人活跃度图片,以简洁的格式汇总你的数据。发布到社交媒体或与团队成员分享,对比 AI 工作流。
+
+## 开始使用
+
+统计功能支持 **PgLite** 和 **数据库** 部署模式。打开账户菜单,进入**数据统计**,面板将自动加载。
@@ -1,8 +1,8 @@
 ---
-title: LobeHub Launches New AI Provider Management System
+title: Custom AI Provider Management
 description: >-
-  LobeHub has revamped its AI Provider Management System, now supporting custom
-  AI providers and models.
+  A rebuilt provider management system that lets you add, edit, and configure
+  custom AI providers and models to match your workflow.
 tags:
   - LobeHub
   - AI Provider
@@ -10,16 +10,23 @@ tags:
   - Multimodal
 ---

-# New AI Provider Management System 🎉
+# Custom AI Provider Management

-We are excited to announce that LobeHub has launched a brand new AI Provider Management System, now available in both the open-source version and the Cloud version ([LobeHub.com](https://LobeHub.com)):
+The provider management system has been rebuilt from the ground up. Now you can add your own AI providers, edit existing ones, and configure custom models with specific capability settings. Available in both the open-source edition and [LobeHub Cloud](https://LobeHub.com).

-## 🚀 Key Updates
+## Full control over your providers

-- 🔮 **Custom AI Providers**: You can now add, remove, or edit AI providers as needed.
-- ⚡️ **Custom Model and Capability Configuration**: Easily add your own models to meet personalized requirements.
-- 🌈 **Multimodal Support**: The new AI Provider Management System fully supports various modalities, including language, images, voice, and more. Stay tuned for video and music generation features!
+Previously, you were limited to a fixed set of pre-configured providers. The new system lets you:

-## 📢 Feedback and Support
+- **Add custom providers**: Connect to any OpenAI-compatible API endpoint
+- **Edit existing setups**: Modify provider settings without waiting for updates
+- **Configure custom models**: Define your own model entries with specific capabilities (vision, function calling, file support)
+- **Remove unused providers**: Clean up your list to focus on what you actually use

-If you have any suggestions or thoughts about the new AI Provider Management System, feel free to engage with us in GitHub Discussions.
+## Multimodal ready
+
+The system supports language, image, and voice capabilities today. Configure which models can process images, which support tool calling, and which handle file uploads. Video and music generation support is planned for upcoming releases.
+
+## Feedback
+
+Have suggestions for the provider management system? Join the conversation in [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions).
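As a side note on the "OpenAI-compatible API endpoint" mentioned in the hunk above: a chat-completions request against such an endpoint has a standard shape. The sketch below builds one with only the Python standard library; the base URL, API key, and model id are placeholders, not real values, and only the payload structure follows the OpenAI convention.

```python
import json
import urllib.request

# Placeholder endpoint and credential for a hypothetical self-hosted,
# OpenAI-compatible provider; substitute your own values.
BASE_URL = "http://localhost:8000/v1"
API_KEY = "sk-example"

payload = {
    "model": "my-custom-model",  # the model id configured for this provider
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send the request; it is left
# out here so the sketch runs without a live server.
```

Any provider that accepts this request shape can, in principle, be added through the custom provider form described above.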
@@ -1,6 +1,6 @@
 ---
-title: LobeHub 推出全新 AI Provider 管理系统
-description: LobeHub 焕新全新 AI Provider 管理系统,已支持自定义 AI 服务商与自定义模型
+title: 自定义 AI Provider 管理
+description: 全新的 Provider 管理系统,支持添加、编辑和配置自定义 AI 服务商与模型
 tags:
   - LobeHub
   - AI Provider
@@ -8,16 +8,23 @@ tags:
   - 多模态
 ---

-# 全新 AI Provider 管理系统 🎉
+# 自定义 AI Provider 管理

-我们很高兴地宣布,LobeHub 推出了全新的 AI Provider 管理系统,已经在开源版与 Cloud 版([LobeHub.com](https://LobeHub.com))中可用:
+Provider 管理系统已彻底重构。现在你可以添加自己的 AI 服务商,编辑现有配置,并设置自定义模型的具体能力参数。开源版和 [LobeHub Cloud](https://LobeHub.com) 均已上线。

-## 🚀 主要更新
+## 完全掌控服务商

-- 🔮 **自定义 AI 服务商**: 现在,您可以根据需要添加、删除或编辑 AI 服务商。
-- ⚡️ **自定义模型与能力配置**: 轻松添加您自己的模型,满足个性化需求。
-- 🌈 **多模态支持**: 新的 AI Provider 管理系统全面支持多种模态,包括语言、图像、语音等,视频和音乐生成功能,敬请期待!
+过去只能使用固定的预配置服务商。新系统让你:

-## 📢 反馈与支持
+- **添加自定义服务商**:连接任何 OpenAI 兼容的 API 端点
+- **编辑现有配置**:随时修改服务商设置,无需等待更新
+- **配置自定义模型**:定义自己的模型条目,设置具体能力(视觉、函数调用、文件支持)
+- **移除不用的服务商**:清理列表,聚焦于实际使用的选项

-如果您对新的 AI Provider 管理系统有任何建议或想法,欢迎在 GitHub Discussions 中与我们交流。
+## 多模态就绪
+
+系统当前支持语言、图像和语音能力。可配置哪些模型支持图像处理、工具调用和文件上传。视频和音乐生成支持将在后续版本推出。
+
+## 反馈
+
+对 Provider 管理系统有建议?欢迎在 [GitHub Discussions](https://github.com/lobehub/lobe-chat/discussions) 参与讨论。
@@ -1,33 +1,35 @@
 ---
-title: >-
-  LobeHub Integrates DeepSeek R1, Bringing a Revolutionary Chain of Thought
-  Experience
-description: >-
-  LobeHub v1.49.12 fully supports the DeepSeek R1 model, providing users with an
-  unprecedented interactive experience in the chain of thought.
+title: DeepSeek R1 Integration with Chain-of-Thought Transparency
+description: LobeHub now supports DeepSeek R1 with real-time reasoning display, making complex problem-solving more transparent and easier to follow.
 tags:
   - LobeHub
   - DeepSeek
   - Chain of Thought
 ---

-# Perfect Integration of DeepSeek R1 and it's Deep Thinking Experience 🎉
+# DeepSeek R1 Integration with Chain-of-Thought Transparency

-After nearly 10 days of meticulous refinement, LobeHub has fully integrated the DeepSeek R1 model in version v1.49.12, offering users a revolutionary interactive experience in the chain of thought!
+LobeHub v1.49.12 now supports DeepSeek R1 across both Community and Cloud editions. This integration brings the model's reasoning process into full view, so you can follow how complex questions are solved step by step.

-## 🚀 Major Updates
+## Transparent reasoning in every conversation

-- 🤯 **Comprehensive Support for DeepSeek R1**: Now fully integrated in both the Community and Cloud versions ([LobeHub.com](https://LobeHub.com)).
-- 🧠 **Real-Time Chain of Thought Display**: Transparently presents the AI's reasoning process, making the resolution of complex issues clear and visible.
-- ⚡️ **Deep Thinking Experience**: Utilizing Chain of Thought technology, it provides more insightful AI conversations.
-- 💫 **Intuitive Problem Analysis**: Makes the analysis of complex issues clear and easy to understand.
+DeepSeek R1's chain-of-thought capability is now fully integrated. When the model works through a problem, you see its reasoning unfold in real time rather than receiving only the final answer.

-## 🌟 How to Use
+In practice, this means debugging a script, working through a math problem, or analyzing a complex topic becomes more collaborative. You can see where the model's logic aligns with your intent and where it might need clarification. The reasoning display appears naturally in the conversation flow without cluttering the interface.

-1. Upgrade to LobeHub v1.49.12 or visit [LobeHub.com](https://LobeHub.com).
-2. Select the DeepSeek R1 model in the settings.
-3. Experience a whole new level of intelligent conversation!
+## How to use DeepSeek R1

-## 📢 Feedback and Support
+1. Upgrade to LobeHub v1.49.12 or visit [LobeHub.com](https://LobeHub.com)
+2. Select DeepSeek R1 from the model dropdown
+3. Start a conversation and watch the reasoning appear as the model thinks through your request

-If you encounter any issues while using the application or have suggestions for new features, feel free to engage with us through GitHub Discussions. Let's work together to create a better LobeHub!
+## Improvements and fixes
+
+- Fixed reasoning content parsing for consistent display across different response types
+- Improved chain-of-thought rendering performance for long reasoning sequences
+
+## Credits
+
+Thanks to the community members who contributed to this release:
+
+@arvinxx @hezhijie0327 @lobehub-team
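An editorial note on the "reasoning content parsing" fix in the hunk above: for reasoning models such as DeepSeek R1, the provider typically returns the model's thinking in a `reasoning_content` field alongside the final `content`, and the UI must split the two. The sketch below illustrates that separation on a hand-written sample dict; it is not LobeHub's actual parsing code, and the sample response is fabricated for illustration.

```python
# Sample object shaped like a reasoning-model chat-completion response;
# in a real integration this would come from the provider's API.
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "reasoning_content": "The user asks for 2+2; basic arithmetic gives 4.",
                "content": "2 + 2 = 4",
            }
        }
    ]
}

def split_reasoning(resp: dict) -> tuple[str, str]:
    """Return (reasoning, answer), tolerating providers that omit reasoning."""
    msg = resp["choices"][0]["message"]
    return msg.get("reasoning_content", ""), msg.get("content", "")

reasoning, answer = split_reasoning(response)
```

Using `dict.get` with a default is what keeps the display consistent across providers that do and do not emit a reasoning field.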
@@ -1,29 +1,35 @@
 ---
-title: LobeHub 重磅集成 DeepSeek R1,带来革命性思维链体验
-description: LobeHub v1.49.12 已完整支持 DeepSeek R1 模型,为用户带来前所未有的思维链交互体验
+title: DeepSeek R1 集成与思维链透明化
+description: LobeHub 现已支持 DeepSeek R1 并实时展示推理过程,让复杂问题的求解过程更加透明、易于理解。
 tags:
   - DeepSeek R1
   - CoT
   - 思维链
 ---

-# 完美集成 DeepSeek R1 ,开启思维链新体验
+# DeepSeek R1 集成与思维链透明化

-经过近 10 天的精心打磨,LobeHub 已在 v1.49.12 版本中完整集成了 DeepSeek R1 模型,为用户带来革命性的思维链交互体验!
+LobeHub v1.49.12 已在社区版与 Cloud 版完整接入 DeepSeek R1。这次集成将模型的推理过程完整呈现,让你可以一步步跟进复杂问题是如何被拆解和解决的。

-## 🚀 重大更新
+## 每次对话中的透明推理

-- 🤯 **DeepSeek R1 全面支持**: 现已在社区版与 Cloud 版([LobeHub.com](https://LobeHub.com))中完整接入
-- 🧠 **实时思维链展示**: 透明呈现 AI 的推理过程,让复杂问题的解决过程清晰可见
-- ⚡️ **深度思考体验**: 通过 Chain of Thought 技术,带来更具洞察力的 AI 对话
-- 💫 **直观的问题解析**: 让复杂问题的分析过程变得清晰易懂
+DeepSeek R1 的思维链能力现已完整集成。当模型处理问题时,你能实时看到它的思考过程,而不只是收到最终答案。

-## 🌟 使用方式
+在实际使用中,无论是调试脚本、解数学题,还是分析复杂话题,这种透明性让对话更具协作感。你可以看到模型的逻辑在哪些环节与你的意图一致,在哪些地方可能需要补充说明。推理展示自然融入对话流程,不会干扰界面阅读。

-1. 升级到 LobeHub v1.49.12 或访问 [LobeHub.com](https://LobeHub.com)
-2. 在设置中选择 DeepSeek R1 模型
-3. 开启全新的智能对话体验!
+## 如何使用 DeepSeek R1

-## 📢 反馈与支持
+1. 升级至 LobeHub v1.49.12 或访问 [LobeHub.com](https://LobeHub.com)
+2. 在模型选择下拉菜单中选取 DeepSeek R1
+3. 开始对话,观察模型在思考过程中的推理展示

-如果您在使用过程中遇到任何问题,或对新功能有任何建议,欢迎通过 GitHub Discussions 与我们交流。让我们一起打造更好的 LobeHub!
+## 体验优化与修复
+
+- 修复推理内容解析,在不同响应类型中保持一致的展示效果
+- 优化长推理序列的思维链渲染性能
+
+## Credits
+
+感谢为本次版本做出贡献的社区成员:
+
+@arvinxx @hezhijie0327 @lobehub-team
@@ -1,30 +1,50 @@
 ---
-title: "A Major AI Ecosystem Upgrade: 50+ Models and 10+ Providers Added \U0001F680"
-description: >-
-  LobeHub v1.49.12 fully supports the DeepSeek R1 model, bringing an
-  unprecedented chain-of-thought experience.
+title: "50+ New Models and 10+ Providers Added to the Ecosystem"
+description: LobeHub expands its AI ecosystem with 50+ new models and 10+ providers, making it easier to access diverse AI capabilities without changing your workflow.
 tags:
-  - DeepSeek R1
-  - CoT
+  - LobeHub
+  - Model Providers
+  - Chain of Thought
 ---

-# Seamless DeepSeek R1 Integration, Unlock a New Chain-of-Thought Experience
+# 50+ New Models and 10+ Providers Added to the Ecosystem

-LobeHub completed its largest AI ecosystem expansion ever in February, delivering a more powerful and flexible AI chat experience.
+LobeHub completed its largest AI ecosystem expansion this February. The goal is straightforward: give you access to more capable models across more providers without breaking your existing workflow or requiring new setup steps.

-## 🌟 Major Updates
+## Expanded provider and reasoning coverage

-- 🔮 A fully expanded provider lineup: added 10+ mainstream AI providers, covering leading global and domestic platforms
-- 🧠 Full reasoning-model integration: real-time chain-of-thought display for next-gen reasoning models like Claude 3.7 and OpenAI o3-mini, with improved DeepSeek R1 parsing across platforms
-- 🌐 Online search revamped: integrated SearchXNG and Perplexity Search, supports deep web crawling; native search support for Gemini 2.0 and the Qwen series
+This release adds 10+ mainstream providers spanning global and domestic platforms. You can now connect to a wider range of services directly from your existing LobeHub setup.

-## 📊 Big Model Library Update
+DeepSeek R1 is now fully supported, and reasoning-model compatibility has expanded to include Claude 3.7 Sonnet and OpenAI o3-mini. These models display their chain-of-thought in real time, so you can follow how conclusions are reached. DeepSeek R1 parsing is consistent across providers, making reasoning output easier to read in daily use.

-Updated 50+ model configurations, including:
+## Rebuilt search capabilities

-- OpenAI gpt-4.5-preview
-- Claude 3.7 Sonnet & Haiku 3.5
-- Gemini 2.0 series improvements
-- The latest models from Moonshot, Tongyi Qwen, MiniMax, and other major domestic platforms
-- Model refreshes across Perplexity, Cloudflare, SiliconFlow, and more
+Online search has been upgraded with SearchXNG and Perplexity integration, plus support for deep web crawling. Gemini 2.0 and Qwen series models can now use native search as part of their reasoning process.
+
+Use these updates to:
+
+- Research topics with live web results directly in chat
+- Compare how different providers handle the same search query
+- Access deeper content extraction without leaving the conversation
+
+## Model library refresh
+
+This release updates 50+ model configurations to keep your options current:
+
+- OpenAI: gpt-4.5-preview
+- Anthropic: Claude 3.7 Sonnet, Haiku 3.5
+- Google: Gemini 2.0 series improvements
+- Domestic providers: Moonshot, Tongyi Qwen, MiniMax latest models
+- Additional platforms: Perplexity, Cloudflare, SiliconFlow refreshes
+
+## Improvements and fixes
+
+- Fixed reasoning display inconsistencies across model providers
+- Improved search result formatting for better readability
+- Enhanced model parameter synchronization across providers
+
+## Credits
+
+Huge thanks to these contributors:
+
+@AmAzing129 @hezhijie0327 @arvinxx @lobehub-team
@@ -1,28 +1,50 @@
 ---
-title: "全面升级 AI 生态,50+ 模型与 10+ 服务商加入 \U0001F680"
-description: LobeHub v1.49.12 已完整支持 DeepSeek R1 模型,为用户带来前所未有的思维链交互体验
+title: "AI 生态扩展:新增 50+ 模型与 10+ 服务商"
+description: LobeHub 完成史上最大规模 AI 生态扩展,新增 50+ 模型和 10+ 服务商,让你无需改变工作流程即可接入更多 AI 能力。
 tags:
-  - DeepSeek R1
-  - CoT
+  - LobeHub
+  - 模型服务商
+  - 思维链
 ---

-# 完美集成 DeepSeek R1 ,开启思维链新体验
+# AI 生态扩展:新增 50+ 模型与 10+ 服务商

-LobeHub 在二月完成了史上最大规模的 AI 生态扩展,带来更强大、更灵活的 AI 对话体验。
+LobeHub 在二月完成了史上最大规模的 AI 生态扩展。目标很简单:在不破坏现有工作流程、无需额外配置的前提下,让你能接入更多服务商和更强的模型。

-## 🌟 重大更新
+## 服务商与推理能力扩展

-- 🔮 AI 服务商矩阵全面扩充:新增 10+ 个主流 AI 提供商,覆盖全球与国内主流平台
-- 🧠 推理模型全面接入:支持 Claude 3.7、OpenAI o3-mini 等新一代推理模型的思维链实时展示,优化 DeepSeek R1 多平台解析
-- 🌐 在线搜索能力革新:集成 SearchXNG、Perplexity 搜索,支持网页深度爬取,Gemini 2.0、Qwen 系列支持原生搜索
+本次新增 10+ 个主流服务商,覆盖全球与国内平台。你可以直接从现有的 LobeHub 设置中连接到更广泛的服务。

-## 📊 模型库大更新
+DeepSeek R1 现已完整支持,推理模型兼容性也扩展到 Claude 3.7 Sonnet 和 OpenAI o3-mini。这些模型会实时展示思维链,让你可以跟进结论的推导过程。跨服务商场景下 DeepSeek R1 的解析表现更加一致,推理内容易于阅读。

-更新 50+ 个模型配置,包括:
+## 搜索能力重构

-- OpenAI gpt-4.5-preview
-- Claude 3.7 Sonnet & Haiku 3.5
-- Gemini 2.0 系列优化
-- 月之暗面、通义千问、MiniMax 等国内平台最新模型
-- Perplexity、Cloudflare、硅基流动等平台模型刷新
+在线搜索已升级,集成 SearchXNG 和 Perplexity,支持深度网页爬取。Gemini 2.0 和 Qwen 系列模型现可在推理过程中使用原生搜索能力。
+
+利用这些更新,你可以:
+
+- 在对话中直接通过实时网页结果进行主题研究
+- 对比不同服务商如何处理同一搜索查询
+- 无需离开对话即可获取更深度的内容提取
+
+## 模型库刷新
+
+本次更新 50+ 个模型配置,保持你的选择与时俱进:
+
+- OpenAI:gpt-4.5-preview
+- Anthropic:Claude 3.7 Sonnet、Haiku 3.5
+- Google:Gemini 2.0 系列优化
+- 国内服务商:月之暗面、通义千问、MiniMax 最新模型
+- 其他平台:Perplexity、Cloudflare、硅基流动等刷新
+
+## 体验优化与修复
+
+- 修复跨服务商的推理展示不一致问题
+- 改进搜索结果格式,提升可读性
+- 增强模型参数跨服务商同步
+
+## Credits
+
+感谢以下贡献者:
+
+@AmAzing129 @hezhijie0327 @arvinxx @lobehub-team

@@ -1,34 +1,64 @@
---
title: 'Hotkey Settings, Data Export, and Multiple Optimizations ⚡'
description: >-
LobeHub v1.49.12 fully supports the DeepSeek R1 model, bringing an
unprecedented chain-of-thought experience.
title: "Customizable Hotkeys, Data Export, and Provider Expansion"
description: LobeHub adds customizable hotkeys, data export functionality, and expands provider support to make daily workflows smoother and more portable.
tags:
- LobeHub Hotkeys
- CoT
- Chain of Thought
- LobeHub
- Hotkeys
- Data Export
---

# Seamless DeepSeek R1 Integration, Unlock a New Chain-of-Thought Experience
# Customizable Hotkeys, Data Export, and Provider Expansion

In March, LobeHub continued to refine the user experience—adding practical features like customizable hotkeys and data export, while further expanding the AI provider ecosystem.
This March release focuses on everyday workflow improvements. Custom hotkeys let you set shortcuts that match your habits. Data export makes your conversations portable. New providers and better reasoning display round out the update.

## 🌟 Key Updates
## Keyboard shortcuts that match your habits

- ⚡ Customizable hotkeys: customize keyboard shortcuts to create a personalized workflow
- 💾 Data export: export data from PGlite and PostgreSQL for better data safety and portability
- 🔮 Provider expansion: added Xinference, Cohere, Search1API, Infini-AI, PPIO, and more
- 🧠 Reasoning model improvements: a reasoning-content selector and enhanced chain-of-thought rendering for models like Claude 3.7 and DeepSeek R1
- 🌐 Web crawling enhancements: special support for YouTube, Reddit, and WeChat Official Account links; improved short-content crawling; added a Search1API crawler implementation
Hotkeys are now customizable. You can remap common actions to keys that feel natural for your workflow rather than learning a preset layout.

## 📊 Model Library Update
New shortcuts have also been added for clearing conversations and deleting messages. If you prefer keyboard-driven workflows, these additions reduce the need to reach for the mouse during fast-paced sessions.

Added multiple mainstream AI models, including Google's Gemini 2.5 Pro Experimental and Gemini 2.0 Flash variants; Anthropic context caching support; OpenAI's gpt-4o-mini-tts voice model; DeepSeek-V3-0324 and Hunyuan-T1-Latest; plus QwQ, QVQ-Max, and Baidu Wenxin ernie-x1-32k-preview.
## Export your data for backup and migration

## 💫 Experience Improvements
Data export is now available for both PGlite and PostgreSQL backends. You can create backups of your conversations or migrate data between installations without manual database operations.

- UI refinements: revamped Drawer styles, improved editing scroll experience, and support for screenshot sharing to clipboard
- Search enhancements: online search for non-function-calling models (e.g., DeepSeek R1); built-in web search for Wenxin and Hunyuan
- File handling: added EPUB chunking support and improved PDF processing
- Performance: refactored the Agent Runtime implementation, optimized database core code, and fixed LiteLLM streaming usage statistics
- Stability fixes: addressed multiple issues including theme flicker, knowledge base, and WeChat login
This matters most when you want to:

- Archive important conversation histories
- Move your setup to a new device or server
- Keep a local backup alongside cloud storage

## Expanded provider and reasoning support

Five new providers join the lineup: Xinference, Cohere, Search1API, Infini-AI, and PPIO. Each integrates into the existing provider selection flow without requiring additional configuration steps.

Reasoning models now have a dedicated content selector, and chain-of-thought display has been improved for Claude 3.7 and DeepSeek R1. The reasoning content is easier to read and navigate during long conversations.

## Better web crawling for diverse content

Web crawling now handles more content types with dedicated support for YouTube, Reddit, and WeChat Official Account links. Short content extraction has been improved, and Search1API provides an additional crawler implementation for better coverage.

## Model library updates

This release brings in several notable additions:

- Google Gemini 2.5 Pro Experimental and Gemini 2.0 Flash variants
- Anthropic context caching support
- OpenAI gpt-4o-mini-tts voice model
- DeepSeek-V3-0324
- Hunyuan-T1-Latest
- QwQ, QVQ-Max
- Baidu Wenxin ernie-x1-32k-preview

## Improvements and fixes

- UI refinements: revamped Drawer styles, improved editor scrolling, clipboard screenshot sharing
- Search enhancements: online search now works with non-function-calling models including DeepSeek R1; built-in web search added for Wenxin and Hunyuan
- File handling: EPUB chunking support, improved PDF processing
- Performance: refactored Agent Runtime, optimized database core code, fixed LiteLLM streaming statistics
- Stability: resolved theme flickering, knowledge base behavior issues, and WeChat login problems

## Credits

Huge thanks to these contributors:

@AmAzing129 @arvinxx @hezhijie0327 @tjx666 @lobehub-team

@@ -1,32 +1,64 @@
---
title: 快捷键设置、数据导出与多项功能优化 ⚡
description: LobeHub v1.49.12 已完整支持 DeepSeek R1 模型,为用户带来前所未有的思维链交互体验
title: "快捷键自定义、数据导出与服务商扩展"
description: LobeHub 新增快捷键自定义、数据导出功能,并扩展服务商支持,让日常使用更顺手、数据更可迁移。
tags:
- LobeHub 快捷键
- CoT
- 思维链
- LobeHub
- 快捷键
- 数据导出
---

# 完美集成 DeepSeek R1 ,开启思维链新体验
# 快捷键自定义、数据导出与服务商扩展

LobeHub 在三月持续优化用户体验,新增快捷键自定义、数据导出等实用功能,并扩展 AI 服务商生态。
三月的版本聚焦于日常体验的打磨。快捷键现在可以自定义,按你的习惯设置。数据导出让对话记录可以随身携带。新增服务商和优化的推理展示进一步完善了这次更新。

## 🌟 重要更新
## 符合习惯的键盘快捷键

- ⚡ 快捷键自定义:支持自定义键盘快捷键,打造个性化操作体验
- 💾 数据导出功能:支持 PGlite 和 PostgreSQL 数据导出,数据安全更有保障
- 🔮 AI 服务商扩展:新增 Xinference、Cohere、Search1API、Infini-AI、PPIO 等服务商
- 🧠 推理模型优化:支持推理内容选择器,优化 Claude 3.7、DeepSeek R1 等模型的思维链展示
- 🌐 网页爬取增强:特别支持 YouTube、Reddit、微信公众号链接,优化短内容爬取,新增 Search1API 爬虫实现
快捷键现已支持自定义。你可以将常用操作映射到自己觉得顺手的按键上,而不必去适应预设的键位布局。

## 📊 模型库更新
同时新增了清除对话和删除消息的快捷键。如果你偏好键盘驱动的工作流,这些新增快捷键能减少在快节奏会话中伸手去够鼠标的次数。

新增了多个主流 AI 模型,包括 Google 的 Gemini 2.5 Pro Experimental 和 Gemini 2.0 Flash 系列变体,Anthropic 支持上下文缓存功能,OpenAI 的 gpt-4o-mini-tts 语音模型,DeepSeek-V3-0324 和 Hunyuan-T1-Latest,以及 QwQ、QVQ-Max、文心 ernie-x1-32k-preview 等模型。
## 数据导出用于备份与迁移

## 💫 体验优化
PGlite 和 PostgreSQL 后端现已支持数据导出。你可以备份对话记录,或在不同安装环境之间迁移数据,无需手动操作数据库。

- 界面改进:重构 Drawer 样式,优化编辑滚动体验,支持截图分享到剪贴板
- 搜索增强:支持非函数调用模型(如 DeepSeek R1)使用在线搜索,Wenxin、Hunyuan 支持内置网络搜索
- 文件处理:新增 EPUB 文件分块支持,优化 PDF 处理
- 性能提升:重构 Agent Runtime 实现,优化数据库核心代码,修复 LiteLLM 流式使用统计
- 稳定性修复:解决主题闪烁、知识库、微信登录等多个问题
这在以下场景尤其有用:

- 归档重要的对话历史
- 将设置迁移到新设备或服务器
- 在云端存储之外保留本地备份

## 服务商与推理支持扩展

五个新服务商加入:Xinference、Cohere、Search1API、Infini-AI、PPIO。每个都集成到现有的服务商选择流程中,无需额外配置步骤。

推理模型现在拥有专用的内容选择器,Claude 3.7 和 DeepSeek R1 的思维链展示也得到了优化。推理内容在长对话中更易阅读和定位。

## 更完善的网页爬取能力

网页爬取现在能处理更多内容类型,特别支持 YouTube、Reddit 和微信公众号链接。短内容提取已优化,Search1API 提供了额外的爬虫实现以获得更好的覆盖。

## 模型库更新

本次更新带来多个值得关注的新增模型:

- Google Gemini 2.5 Pro Experimental 和 Gemini 2.0 Flash 系列
- Anthropic 上下文缓存支持
- OpenAI gpt-4o-mini-tts 语音模型
- DeepSeek-V3-0324
- Hunyuan-T1-Latest
- QwQ、QVQ-Max
- 百度文心 ernie-x1-32k-preview

## 体验优化与修复

- 界面改进:重构 Drawer 样式、优化编辑器滚动、支持截图分享至剪贴板
- 搜索增强:在线搜索现支持非函数调用模型(包括 DeepSeek R1);为文心和混元增加内置网页搜索
- 文件处理:新增 EPUB 分块支持,优化 PDF 处理
- 性能提升:重构 Agent Runtime,优化数据库核心代码,修复 LiteLLM 流式统计
- 稳定性修复:解决主题闪烁、知识库行为异常、微信登录等问题

## Credits

感谢以下贡献者:

@AmAzing129 @arvinxx @hezhijie0327 @tjx666 @lobehub-team

@@ -1,37 +1,60 @@
---
title: Brand-New Design Style and Desktop App Release ✨
description: >-
LobeHub officially launches the desktop app, delivering a more modern and
smoother experience.
title: "Lobe UI v2 Design System and Desktop App Launch"
description: LobeHub launches a refreshed visual design with Lobe UI v2 and officially releases the desktop app for Windows and macOS.
tags:
- Desktop App
- LobeHub
- Chain of Thought
- Lobe UI v2
---

# Brand-New Design Style and Desktop App Release ✨
# Lobe UI v2 Design System and Desktop App Launch

In April, LobeHub shipped a major visual upgrade with the brand-new Lobe UI v2 design system, and officially released the desktop app—bringing a more modern and fluid experience.
April brings two major changes you can see and touch: a complete visual refresh powered by the new Lobe UI v2 design system, and the official release of the LobeHub desktop app. Together they make daily usage feel more modern, more fluid, and better suited to desktop workflows.

## 🌟 Major Updates
## A refreshed visual foundation

- 🎨 New design system: upgraded to Lobe UI v2 for a more modern interface and interaction experience
- 💻 Official desktop release: native features like Windows/macOS system tray and window controls for a more convenient desktop workflow
- 🔌 MCP protocol enhancements: supports Streamable HTTP MCP servers, improves stdio MCP server installation, and adds environment variable parameter support
- 🔍 Search expansion: adds Search1API as a search provider, and improves SearXNG category and time-range selection
- 🔑 SSO expansion: adds Keycloak single sign-on support and improves the OIDC OAuth workflow
The interface now runs on Lobe UI v2. Every component has been reconsidered for clarity and consistency. Buttons, typography, spacing, and color palettes align across the entire application.

## 📊 Model Library Updates
In practice, this means less visual noise and more predictable interactions. The sidebar is cleaner. Model tags are arranged more neatly. System roles can be collapsed to save space. Mobile layouts have been refined for better thumb reachability.

## Desktop app with native integration

LobeHub is now available as a native desktop application for Windows and macOS. It includes system tray integration, native window controls, and keyboard shortcuts that feel at home on your operating system.

The desktop app matters most when you:

- Want LobeHub available without keeping a browser tab open
- Prefer native window management and alt-tabbing
- Need system-level shortcuts for common actions
- Work across multiple workspaces and want the app docked consistently

## MCP, search, and SSO improvements

MCP protocol support has been enhanced with Streamable HTTP server support. Stdio MCP server installation is smoother, and environment variable parameters are now supported for more flexible server configurations.
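
To make the environment-variable feature concrete, here is a rough sketch of expanding `${VAR}` references in a server entry before launch. The `command`/`args`/`env` field names and the package name are assumptions for illustration, not LobeHub's confirmed configuration schema.

```python
import os
import re

# Hypothetical stdio MCP server entry; the field names mirror a common
# convention for stdio servers but are NOT a confirmed LobeHub schema.
server = {
    "command": "npx",
    "args": ["-y", "@example/some-mcp-server"],
    "env": {"API_KEY": "${MY_API_KEY}"},
}

def resolve_env(env):
    """Expand ${VAR} references from the host environment before launch.

    Unset variables expand to an empty string so the launch command stays
    well-formed even when a value is missing.
    """
    def expand(value):
        return re.sub(
            r"\$\{(\w+)\}",
            lambda m: os.environ.get(m.group(1), ""),
            value,
        )
    return {key: expand(value) for key, value in env.items()}

os.environ["MY_API_KEY"] = "sk-demo"
print(resolve_env(server["env"]))  # {'API_KEY': 'sk-demo'}
```

The point of the pattern is that secrets stay in the host environment rather than in the stored server entry.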

Search adds Search1API as a new provider, and SearXNG now supports category and time-range selection for more targeted results.

SSO integration expands with Keycloak support, and the OIDC OAuth workflow has been refined for fewer friction points during authentication.

## Model library updates

Coverage has expanded to keep provider evaluation and migration straightforward:

- OpenAI: GPT-4.1 series, o3/o4-mini
- Google: Gemini 2.5 Pro Experimental, reasoning token usage statistics
- Google: Gemini 2.5 Pro Experimental with reasoning token tracking
- xAI: Grok 3 series
- Latest models from Anthropic, Mistral, Qwen, Ollama, and more
- Anthropic, Mistral, Qwen, Ollama: latest model refreshes

## 💫 Experience Improvements
## Improvements and fixes

- UI improvements: support for collapsing system roles, better mobile styles, neatly arranged model tags, and allow copy/edit on error
- Performance stats: token generation throughput, Aliyun Bailian token tracking, and Google Gemini reasoning token statistics
- Hotkey enhancements: new shortcuts for clearing chat messages and deleting messages, plus support for custom hotkey settings
- Tool calling improvements: update tool-call arguments and re-trigger; local file plugins now support writing files
- Web crawling: adds crawler rules for Xiaohongshu
- Performance stats: token generation throughput display, Aliyun Bailian token tracking, Google Gemini reasoning token statistics
- Hotkey enhancements: shortcuts for clearing chat and deleting messages, plus full custom hotkey configuration
- Tool calling: update tool-call arguments and re-trigger; local file plugins now support writing files
- Error handling: copy and edit options available when errors occur
- Web crawling: added crawler rules for Xiaohongshu

## Credits

Huge thanks to these contributors:

@arvinxx @hezhijie0327 @Innei @nekomeowww @lobehub-team

@@ -1,35 +1,60 @@
---
title: 全新设计风格与桌面端发布 ✨
description: LobeHub 正式发布桌面端应用,带来更现代、更流畅的使用体验
title: "Lobe UI v2 设计系统与桌面端正式发布"
description: LobeHub 推出基于 Lobe UI v2 的全新视觉设计,并正式发布 Windows 与 macOS 桌面端应用。
tags:
- 桌面端
- LobeHub
- 思维链
- Lobe UI v2
---

# 全新设计风格与桌面端发布 ✨
# Lobe UI v2 设计系统与桌面端正式发布

LobeHub 在四月完成重大视觉升级,推出全新 Lobe UI v2 设计系统,并正式发布桌面端应用,带来更现代、更流畅的使用体验。
四月带来了两项看得见、摸得着的重大变化:基于全新 Lobe UI v2 设计系统的完整视觉刷新,以及 LobeHub 桌面端的正式发布。两者结合,让日常使用更现代、更流畅,也更贴合桌面场景。

## 🌟 重大更新
## 焕然一新的视觉基础

- 🎨 全新设计系统:升级至 Lobe UI v2,带来更现代化的界面设计与交互体验
- 💻 桌面端正式发布:支持 Windows、macOS 系统托盘、窗口控制等原生功能,提供更便捷的桌面使用体验
- 🔌 MCP 协议增强:支持 Streamable HTTP MCP 服务器,优化 stdio MCP 服务器安装体验,新增环境变量参数支持
- 🔍 搜索功能扩展:新增 Search1API 搜索服务商支持,优化 SearXNG 分类与时间范围选择
- 🔑 SSO 认证扩展:新增 Keycloak 单点登录支持,改进 OIDC OAuth 工作流
界面现已升级到 Lobe UI v2。每个组件都经过重新审视,追求清晰与一致。按钮、排版、间距和色调在整个应用中保持统一。

## 📊 模型库更新
在实际使用中,这意味着更少的视觉干扰和更可预测的交互。侧边栏更清爽,模型标签排列更整齐,系统角色可以折叠以节省空间,移动端布局经过优化以适应拇指操作。

- OpenAI: GPT-4.1 系列、o3/o4-mini
- Google: Gemini 2.5 Pro Experimental、推理 Token 统计支持
- xAI: Grok 3 系列模型
- Anthropic、Mistral、Qwen、Ollama 等平台最新模型
## 具备原生集成的桌面端

## 💫 体验优化
LobeHub 现已作为原生桌面应用支持 Windows 和 macOS。它包含系统托盘集成、原生窗口控制,以及与操作系统协调的键盘快捷键。

- 界面改进:支持系统角色折叠、优化移动端样式、整齐排列模型标签、错误时允许复制 / 编辑
- 性能统计:显示 Token 生成性能、阿里云百炼 Token 使用追踪、Google Gemini 推理 Token 统计
- 快捷键增强:新增清除聊天消息、删除消息等快捷键,支持自定义快捷键设置
- 工具调用优化:支持更新工具调用参数并重新触发、本地文件插件新增写入文件功能
- 网页爬取:新增小红书爬虫规则支持
桌面端在以下场景尤为实用:

- 希望无需保持浏览器标签页打开就能使用 LobeHub
- 偏好原生的窗口管理和 Alt-Tab 切换
- 需要系统级快捷键执行常用操作
- 在多工作区之间切换,希望应用始终固定在 Dock 栏

## MCP、搜索与 SSO 改进

MCP 协议支持已增强,新增 Streamable HTTP 服务器支持。Stdio MCP 服务器安装体验更流畅,环境变量参数现已支持,提供更灵活的服务器配置。

搜索新增 Search1API 作为服务商,SearXNG 现支持分类和时间范围选择,便于更精准的结果筛选。

SSO 集成扩展至 Keycloak,OIDC OAuth 工作流经过优化,认证过程中的摩擦点更少。

## 模型库更新

覆盖范围进一步扩展,帮助团队在不同服务商之间顺畅评估和迁移:

- OpenAI:GPT-4.1 系列、o3/o4-mini
- Google:Gemini 2.5 Pro Experimental,支持推理 Token 追踪
- xAI:Grok 3 系列
- Anthropic、Mistral、Qwen、Ollama:最新模型刷新

## 体验优化与修复

- 性能统计:Token 生成吞吐率显示、阿里云百炼 Token 追踪、Google Gemini 推理 Token 统计
- 快捷键增强:清除聊天和删除消息快捷键,支持完整的自定义快捷键配置
- 工具调用优化:支持更新工具调用参数并重新触发;本地文件插件新增写入文件功能
- 错误处理:出错时支持复制和编辑
- 网页爬取:新增小红书爬虫规则

## Credits

感谢以下贡献者:

@arvinxx @hezhijie0327 @Innei @nekomeowww @lobehub-team

@@ -1,22 +1,57 @@
---
title: "Prompt Variables and Claude 4 Reasoning Model Support \U0001F680"
description: >-
Supports Claude 4 reasoning models and expands search and reasoning
capabilities across multiple AI providers.
title: "Prompt Variables and Claude 4 Reasoning Model Support"
description: LobeHub introduces prompt variables for reusable templates and adds full support for Claude 4 reasoning models with web search integration.
tags:
- DeepSeek R1
- Prompt Variables
- Claude Sonnet 4
- Claude 4
- Reasoning Models
---

# Prompt Variables and Claude 4 Reasoning Model Support 🚀
# Prompt Variables and Claude 4 Reasoning Model Support

From May to June, LobeHub continued to refine core capabilities—introducing a prompt-variables system, adding support for Claude 4 reasoning models, and expanding search and crawling across multiple AI providers.
From May through June, LobeHub focused on making repeated workflows easier to run. Prompt variables let you reuse prompt structures with different inputs. Claude 4 support brings the latest reasoning capabilities with integrated web search. Expanded crawling and file handling round out the update.

## 🌟 Key Updates
## Reusable prompts with variables

- 💬 Prompt variables: use placeholder variables in prompts and the input box for dynamic content replacement
- 🧠 Claude 4 support: full integration of Anthropic Claude 4 reasoning models, including the Web Search tool and Beta Header
- 🔍 Search expansion: adds the ModelScope provider with broader search and crawling support
- 📄 File upload improvements: upload files directly into chat context, with improved PDF and XLSX parsing
- 🔐 Page protection: supports protected page access and improves Clerk middleware route protection
You can now use placeholder variables in prompts and the input box. The same prompt structure can be reused with different dynamic values, reducing manual editing when you run similar tasks repeatedly.

In practice, this works well for:

- Templates that need a name, date, or topic inserted each time
- Structured analysis where only the source material changes
- Repeated workflows with consistent formatting but variable content

Variables are defined with simple syntax and filled in at runtime, so you spend less time retyping and more time getting results.
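
As a sketch of how placeholder substitution like this can work, the snippet below assumes a `{{name}}`-style syntax; it is an illustration of the concept, not LobeHub's actual variable implementation.

```python
import re

def render_prompt(template, variables):
    """Fill each {{name}} placeholder from `variables`.

    Unknown placeholders are left intact so a missing value stays visible
    instead of silently vanishing from the prompt.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

template = "Summarize {{topic}} for a {{audience}} audience."
print(render_prompt(template, {"topic": "MCP servers", "audience": "beginner"}))
# Summarize MCP servers for a beginner audience.
```

Keeping unresolved placeholders visible is a deliberate choice here: it turns a forgotten value into something you notice before sending.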

## Claude 4 with web search and beta features

Anthropic Claude 4 reasoning models are now fully integrated. The implementation includes Web Search tool support and Beta Header compatibility, so Claude 4 can use the same reasoning and retrieval capabilities that power other advanced workflows.

This means you can:

- Run complex reasoning tasks with transparent chain-of-thought
- Combine reasoning with live web retrieval in a single conversation
- Access the latest Claude capabilities as they become available

## Broader provider and content support

ModelScope joins the provider lineup, expanding the range of models available without changing how you work. Search and crawling support has been extended across more platforms for better content coverage.

File uploads now work directly in chat context, with improved parsing for PDF and XLSX files. This makes it easier to reference documents and spreadsheets within ongoing conversations.

## Access control and security improvements

Protected page access has been added for sensitive workflows. Clerk middleware route protection has been improved to handle authentication edge cases more reliably.

## Improvements and fixes

- Enhanced PDF and XLSX content extraction accuracy
- Improved file upload handling in chat context
- Better route protection for authenticated pages
- Provider-specific optimizations for Claude 4 token handling

## Credits

Huge thanks to these contributors:

@arvinxx @hezhijie0327 @lobehub-team

@@ -1,20 +1,57 @@
---
title: "提示词变量与 Claude 4 推理模型支持 \U0001F680"
description: 支持 Claude 4 推理模型,并扩展多个 AI 服务商的搜索与推理能力
title: "提示词变量与 Claude 4 推理模型支持"
description: LobeHub 引入提示词变量实现模板复用,并完整支持 Claude 4 推理模型及网页搜索集成。
tags:
- DeepSeek R1
- 提示词变量
- Claude Sonnet 4
- Claude 4
- 推理模型
---

# 提示词变量与 Claude 4 推理模型支持 🚀
# 提示词变量与 Claude 4 推理模型支持

LobeHub 在五月至六月持续优化核心功能,新增提示词变量系统、支持 Claude 4 推理模型,并扩展多个 AI 服务商的搜索与推理能力。
五月到六月的迭代聚焦于让重复性工作流更易执行。提示词变量让你可以用不同输入复用提示词结构。Claude 4 支持带来最新的推理能力并集成网页搜索。扩展的爬虫和文件处理能力进一步完善了这次更新。

## 🌟 主要更新
## 带变量的可复用提示词

- 💬 提示词变量系统:支持在提示词和输入框中使用占位符变量,实现动态内容替换
- 🧠 Claude 4 系列支持:完整接入 Anthropic Claude 4 推理模型,支持 Web Search 工具与 Beta Header
- 🔍 搜索能力扩展:新增 ModelScope 服务商,支持更多平台的搜索与爬虫功能
- 📄 文件上传优化:支持直接将文件上传至聊天上下文,改进 PDF、XLSX 文件内容解析
- 🔐 页面保护功能:支持页面访问保护,优化 Clerk 中间件路由保护
现在可以在提示词和输入框中使用占位符变量。同一套提示词结构可以搭配不同的动态值重复使用,在执行相似任务时减少手动编辑。

这在以下场景表现良好:

- 每次需要插入姓名、日期或主题的模板
- 仅源材料变化但格式固定的结构化分析
- 格式一致但内容可变的高频工作流

变量使用简单语法定义,在运行时填充,让你花更少时间重复输入,更快获得结果。

## 支持网页搜索与 Beta 功能的 Claude 4

Anthropic Claude 4 推理模型现已完整集成。实现包含 Web Search 工具支持和 Beta Header 兼容性,让 Claude 4 可以使用与其他高级工作流相同的推理和检索能力。

这意味着你可以:

- 执行具有透明思维链的复杂推理任务
- 在单次对话中结合推理与实时网页检索
- 在最新 Claude 能力发布时立即使用

## 更广泛的服务商与内容支持

ModelScope 加入服务商阵容,扩展了可用模型的范围,且不改变你的工作方式。搜索和爬虫支持已扩展到更多平台,获得更好的内容覆盖。

文件上传现在可以直接在聊天上下文中进行,PDF 和 XLSX 文件的解析也得到改进。这让在持续对话中引用文档和电子表格变得更加容易。

## 访问控制与安全改进

敏感工作流现已支持受保护的页面访问。Clerk 中间件路由保护已优化,更可靠地处理认证边界情况。

## 体验优化与修复

- 提升 PDF 和 XLSX 内容提取准确性
- 改进聊天上下文中的文件上传处理
- 优化认证页面的路由保护
- 针对 Claude 4 Token 处理的服务商特定优化

## Credits

感谢以下贡献者:

@arvinxx @hezhijie0327 @lobehub-team

@@ -1,9 +1,7 @@
---
title: "MCP Marketplace and Search Provider Expansion \U0001F50D"
description: >-
Adds support for multiple search providers, integrates Amazon Cognito and
Google SSO, and continues improving user experience and the developer
ecosystem.
MCP Marketplace is now live with one-click plugin installation, alongside expanded search providers and new SSO options for easier team access.
tags:
- MCP Marketplace
- Best MCP

@@ -12,20 +10,31 @@ tags:

# MCP Marketplace and Search Provider Expansion 🔍

From June to July, LobeHub launched the MCP plugin marketplace, added support for multiple search providers, and integrated Amazon Cognito and Google SSO—continuing to improve both user experience and the developer ecosystem.
From June through July, LobeHub focused on two things: making tools easier to discover and making search more flexible. The new MCP Marketplace brings one-click plugin installation to desktop, while expanded search providers and authentication options give teams more choice in how they work.

## 🌟 Major Updates
## MCP Marketplace: discover and install tools faster

- 🛒 MCP Marketplace launch: one-click MCP plugin installation on desktop, with a richer ecosystem and smoother install experience
- 🔍 Search provider expansion: adds built-in providers like Brave, Google PSE, and Kagi; supports Vertex AI Google Search Grounding
- 🔐 Auth system enhancements: integrates Amazon Cognito and Google SSO as authentication providers, with page access protection
- 🤖 v0 (Vercel) support: adds the Vercel v0 provider
- 📊 Analytics framework: introduces event tracking for analytics to improve user behavior insights
Finding and installing MCP plugins used to require manual setup. Now the MCP Marketplace brings one-click installation directly to the desktop app. Browse available tools, see what they do, and add them to your workspace without leaving the interface.

## 💫 Experience Improvements
This makes the plugin ecosystem more accessible—especially for teams who want to extend their Agents without diving into configuration files or command-line steps.

- UI improvements: better mobile model selector layout, improved text overflow handling, and fixes for loading animation switching
- Reasoning configuration: improves Gemini thinkingBudget configuration and correctly handles the reasoning\_effort parameter
- Search optimizations: supports Browserless blockAds and stealth parameters; fixes Mermaid display issues in Firefox
- Desktop enhancements: improved multi-monitor window opening behavior, theme fixes, and optimized chunked loading
- Response animations: improved merge logic for response animations and adds a transition animation toggle
## More ways to search and authenticate

Search coverage now includes Brave, Google PSE, and Kagi as built-in options, plus Vertex AI Google Search Grounding for grounded generation workflows. You can choose the provider that fits your accuracy, speed, and privacy needs.
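
One pattern that having several providers enables is a simple preference-ordered fallback. The sketch below is illustrative only: the provider names come from the list above, but the search callables are hypothetical stand-ins, not LobeHub's internal API.

```python
def try_providers(query, providers):
    """Ask each (name, search) pair in preference order; return the first
    successful result together with the provider that produced it."""
    errors = {}
    for name, search in providers:
        try:
            return name, search(query)
        except Exception as exc:  # a real client would catch narrower errors
            errors[name] = repr(exc)
    raise RuntimeError(f"all providers failed: {errors}")

def kagi(query):
    raise TimeoutError("kagi timed out")  # simulate a slow provider

providers = [("kagi", kagi), ("brave", lambda q: [f"result for {q}"])]
print(try_providers("lobehub mcp", providers))
# ('brave', ['result for lobehub mcp'])
```

Ordering the list by preference lets a fast or privacy-focused provider be tried first while still degrading gracefully when it is unavailable.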

On the authentication side, Amazon Cognito and Google SSO are now available as identity providers. Teams already using these systems can connect LobeHub without adding another set of credentials. Page access protection is also included, so you can control who sees what based on their sign-in status.

## Runtime improvements

This cycle also brings v0 (Vercel) provider support and an analytics event-tracking framework for better visibility into usage patterns.

## Improvements and fixes

- Refined mobile model selector layout and text overflow handling
- Improved loading animation transitions
- Optimized Gemini thinkingBudget configuration and reasoning\_effort parameter handling
- Added Browserless blockAds and stealth parameters for cleaner search results
- Fixed Mermaid rendering issues in Firefox
- Improved multi-monitor window opening behavior on desktop
- Fixed theme-related edge cases and optimized chunked loading
- Added transition animation toggle for response rendering

@@ -1,6 +1,6 @@
---
title: "MCP 市场与搜索服务商扩展 \U0001F50D"
description: 新增多个搜索服务商支持,并集成 Amazon Cognito 与 Google SSO 认证,持续优化用户体验与开发者生态
description: MCP 市场正式上线,支持一键安装插件;同时扩展搜索服务商覆盖,新增 Amazon Cognito 和 Google SSO 认证,让团队接入更便捷。
tags:
- MCP 市场
- Best MCP

@@ -9,20 +9,31 @@ tags:

# MCP 市场与搜索服务商扩展 🔍

LobeHub 在六月至七月推出 MCP 插件市场,新增多个搜索服务商支持,并集成 Amazon Cognito 与 Google SSO 认证,持续优化用户体验与开发者生态。
六月到七月,LobeHub 重点做了两件事:让工具更易发现,让搜索更灵活。全新的 MCP 市场为桌面端带来一键安装插件能力;扩展的搜索服务商和认证选项则为团队提供了更多工作方式的选择。

## 🌟 重大更新
## MCP 市场:更快发现和安装工具

- 🛒 MCP 市场上线:桌面端支持 MCP 插件一键安装,提供丰富的插件生态与便捷的安装体验
- 🔍 搜索服务商扩展:新增 Brave、Google PSE、Kagi 等内置搜索服务商,支持 Vertex AI Google Search Grounding
- 🔐 认证系统增强:集成 Amazon Cognito 与 Google SSO 作为认证提供商,支持页面访问保护
- 🤖 v0 (Vercel) 支持:新增 Vercel v0 服务商支持
- 📊 数据分析框架:实现数据分析事件追踪框架,优化用户行为分析
以往安装 MCP 插件需要手动配置。现在 MCP 市场直接在桌面端提供一键安装。浏览可用工具、了解功能,无需离开界面即可添加到工作区。

## 💫 体验优化
这让插件生态更易触达,特别适合希望扩展 Agent 能力但又不想折腾配置文件或命令行的团队。

- 界面改进:优化移动端模型选择布局、改进文本溢出处理、修复加载动画切换问题
- 推理配置:优化 Gemini thinkingBudget 配置、正确处理 reasoning\_effort 参数
- 搜索优化:支持 Browserless blockAds 与 stealth 参数、修复 Firefox Mermaid 显示错误
- 桌面端增强:改进多显示器窗口打开体验、修复主题问题、优化分块加载
- 响应动画:改进响应动画合并逻辑、支持过渡动画开关
## 更多搜索与认证选择

搜索覆盖新增 Brave、Google PSE、Kagi 等内置选项,同时支持 Vertex AI Google Search Grounding 以满足检索增强生成场景。你可以按准确度、速度和隐私需求选择合适的服务商。

认证方面新增 Amazon Cognito 和 Google SSO 支持。已使用这些系统的团队无需额外凭证即可接入 LobeHub。同时提供页面访问保护,可按登录状态控制内容可见范围。

## 运行时改进

本周期还新增 v0 (Vercel) 服务商支持,以及事件追踪分析框架,便于洞察使用模式。

## 体验优化与修复

- 优化移动端模型选择器布局与文本溢出处理
- 改进加载动画切换效果
- 优化 Gemini thinkingBudget 配置与 reasoning\_effort 参数处理
- 新增 Browserless blockAds 与 stealth 参数,搜索结果更干净
- 修复 Firefox 中 Mermaid 渲染问题
- 改进桌面端多显示器窗口打开体验
- 修复主题相关边界情况并优化分块加载
- 新增响应渲染过渡动画开关
@@ -1,30 +1,44 @@
---
title: "AI Image Generation and Desktop Enhancements \U0001F3A8"
title: "Image Generation, Desktop, and Auth Updates \U0001F3A8"
description: >-
  Introduces AI image generation with multiple providers and continues improving
  the desktop experience and authentication system.
  Generate AI images across multiple providers, connect with expanded identity options, and run desktop workflows with fewer interruptions.
tags:
  - AI Image Generation
  - Image Generation
  - Desktop App
  - Authentication
  - API Key
---

# AI Image Generation and Desktop Enhancements 🎨
# Image Generation, Desktop, and Auth Updates 🎨

From July to August, LobeHub introduced AI image generation, added support for multiple providers, and continued improving the desktop experience and authentication system.
From July through August, LobeHub shipped AI image generation, expanded identity and access options, and kept refining the desktop experience. The common thread is practical capability you can use immediately—more ways to create visuals, easier connections to existing systems, and smoother day-to-day workflows on desktop.

## 🌟 Major Updates
## Generate images your way

- 🎨 AI image generation: generate images via providers like Google Imagen, Qwen, Zhipu CogView4, and MiniMax
- 🔐 Auth expansion: adds Amazon Cognito, Google SSO, and Okta authentication support
- 🖥️ Desktop optimizations: network proxy configuration, custom hotkeys, OAuth refactor, and remote chat support
- 🔌 MCP auth enhancements: supports authentication for Streamable HTTP MCP Servers
- 🔑 API Key management: full-featured API Key management
LobeHub now supports AI image generation through Google Imagen, Qwen, Zhipu CogView4, and MiniMax. Pick the provider that matches your quality, speed, and budget preferences without changing your overall workflow.

## 📊 Model Library Updates
Use this to:

Adds support for Claude Opus 4.1, Grok-4, Kimi K2, and Ollama gpt-oss; updates Gemini 2.5 Flash-Lite GA, Hunyuan A13B thinking, Doubao reasoning models, and more.
- Generate visuals directly in chat without switching tools
- Compare outputs across providers for the same prompt
- Route image requests based on your current project needs

## 💫 Experience Improvements
## Connect to existing identity systems

Adds desktop notifications, improves the settings window layout, and enhances multi-monitor experience; improves MCP plugin invocation and rendering, and fixes Gemini Artifacts line-break issues.
Authentication now extends to Amazon Cognito, Google SSO, and Okta. If your organization already runs on these systems, connecting LobeHub is now a configuration step rather than a migration project.

This release also adds authentication support for Streamable HTTP MCP servers, and the API Key management interface is now fully functional for creating, managing, and rotating keys.

## Desktop and model updates

Desktop improvements include network proxy configuration, custom hotkey support, OAuth flow refinements, and remote chat capabilities. These changes make the desktop app more adaptable across different network environments and team setups.

The model library expands with Claude Opus 4.1, Grok-4, Kimi K2, and Ollama gpt-oss, plus updates to Gemini 2.5 Flash-Lite GA, Hunyuan A13B thinking, and Doubao reasoning models.

## Improvements and fixes

- Added desktop notifications
- Refined settings window layout
- Improved multi-monitor behavior
- Enhanced MCP plugin invocation and rendering
- Fixed Gemini Artifacts line-break issues

@@ -1,28 +1,43 @@
---
title: 全新设计风格与桌面端发布 ✨
description: LobeHub v1.49.12 已完整支持 DeepSeek R1 模型,为用户带来前所未有的思维链交互体验
title: 图像生成、桌面端与认证更新 🎨
description: 通过多个服务商生成 AI 图像,用更多身份系统完成接入,并在桌面端享受更顺畅的工作流。
tags:
  - DeepSeek R1
  - CoT
  - 思维链
  - 图像生成
  - 桌面应用
  - 认证
  - API Key
---

# AI 图像生成与桌面端增强 🎨
# 图像生成、桌面端与认证更新 🎨

LobeHub 在七月至八月推出 AI 图像生成功能,新增多个服务商支持,并持续优化桌面端体验与认证系统。
七月到八月,LobeHub 上线了 AI 图像生成功能,扩展了身份与访问能力,并持续打磨桌面端体验。这次更新的主线是立即可用的实用能力 —— 更多视觉创作方式、更易接入现有系统、更顺畅的桌面日常操作。

## 🌟 重大更新
## 按你的方式生成图像

- 🎨 AI 图像生成:支持通过 Google Imagen、Qwen、Zhipu CogView4、MiniMax 等服务商生成图像
- 🔐 认证系统扩展:新增 Amazon Cognito、Google SSO、Okta 认证支持
- 🖥️ 桌面端优化:支持网络代理配置、自定义快捷键、OAuth 重构与远程聊天支持
- 🔌 MCP 认证增强:支持 Streamable HTTP MCP Server 认证
- 🔑 API Key 管理:实现完整的 API Key 管理功能
LobeHub 现已支持通过 Google Imagen、Qwen、智谱 CogView4、MiniMax 等服务商进行 AI 图像生成。你可以按质量、速度和成本偏好选择服务商,无需改变整体工作流。

## 📊 模型库更新
可用于:

新增 Claude Opus 4.1、Grok-4、Kimi K2、Ollama gpt-oss 支持,更新 Gemini 2.5 Flash-Lite GA、Hunyuan A13B thinking、Doubao 思维模型等
- 直接在对话中生成视觉素材,无需切换工具
- 对比不同服务商对同一提示词的输出效果
- 根据当前项目需求灵活路由图像请求

## 💫 体验优化
## 接入现有身份体系

桌面端新增通知功能、优化设置窗口布局、改进多显示器体验;优化 MCP 插件调用与显示、修复 Gemini Artifacts 换行问题
认证能力扩展至 Amazon Cognito、Google SSO 和 Okta。如果你的组织已在运行这些系统,接入 LobeHub 现在只是配置步骤而非迁移项目。

本次更新还为 Streamable HTTP MCP Server 添加认证支持,API Key 管理界面也已完整支持创建、管理和轮换密钥。

## 桌面端与模型更新

桌面端改进包括网络代理配置、自定义快捷键、OAuth 流程优化和远程聊天能力。这些改动让桌面应用在不同网络环境和团队配置下更具适应性。

模型库新增 Claude Opus 4.1、Grok-4、Kimi K2、Ollama gpt-oss,同时更新 Gemini 2.5 Flash-Lite GA、Hunyuan A13B 思维模型、Doubao 推理模型等。

## 体验优化与修复

- 新增桌面端通知能力
- 优化设置窗口布局
- 改进多显示器体验
- 增强 MCP 插件调用与渲染
- 修复 Gemini Artifacts 换行问题

@@ -1,8 +1,7 @@
---
title: "Gemini Image Generation and Non-Streaming Mode Support \U0001F3A8"
description: >-
  Adds Gemini 2.5 Flash Image generation, non-streaming response mode, and
  expands provider and model support.
  Gemini 2.5 Flash Image generation, non-streaming response mode, and expanded model coverage give you more flexibility in how you generate and receive content.
tags:
  - Gemini
  - Nano Banana
@@ -11,20 +10,27 @@ tags:

# Gemini Image Generation and Non-Streaming Mode Support 🎨

From August to September, LobeHub added Gemini 2.5 Flash Image generation, introduced non-streaming response mode support, and expanded provider and model support across the ecosystem.
From August through September, LobeHub expanded Gemini-related capabilities across image generation, response behavior, and model coverage. The goal was straightforward: make newer Gemini workflows easier to run in production while keeping your provider options open.

## 🌟 Major Updates
## More ways to generate images

- 🎨 Gemini image generation: supports Gemini 2.5 Flash Image (Nano Banana), Imagen 4 GA, and other image-generation models
- 🔄 Non-streaming mode: adds non-streaming response mode support for more usage scenarios
- 🌐 Provider expansion: adds image-generation providers like Nebius, AkashChat, and BFL
- 🖼️ Azure OpenAI image generation: generate images via Azure OpenAI
- 🔧 HTML preview: supports previewing HTML content
This release adds support for Gemini 2.5 Flash Image (Nano Banana) and Imagen 4 GA, alongside coverage for Nebius, AkashChat, and BFL. Azure OpenAI can now also handle image generation requests.

## 📊 Model Library Updates
The new non-streaming response mode is useful when you need complete responses before displaying them—think batched processing, structured outputs, or integrations that expect synchronous results.

Adds GPT-5 series, Claude Opus 4.1, Grok Code Fast 1, DeepSeek V3.1, and Gemini URL Context Tool support.
Use these updates to:

## 💫 Experience Improvements
- Generate images through Gemini models directly in chat
- Switch between streaming and complete response modes based on your use case
- Route image requests to newer providers as they become available

Improves the reasoning scroll mask effect, adds hotkeys for switching sessions, improves mobile form controls, and enhances Gemini error messaging.
## Model library additions

The model library now includes GPT-5 series, Claude Opus 4.1, Grok Code Fast 1, DeepSeek V3.1, and the Gemini URL Context Tool for grounded retrieval.

## Improvements and fixes

- Improved reasoning scroll mask effect
- Added hotkeys for switching sessions
- Enhanced mobile form controls
- Refined Gemini error messaging

@@ -1,24 +1,35 @@
---
title: "Gemini 图像生成与非流式模式支持 \U0001F3A8"
description: LobeHub v1.49.12 已完整支持 DeepSeek R1 模型,为用户带来前所未有的思维链交互体验
description: Gemini 2.5 Flash Image 图像生成、非流式响应模式,以及扩展的模型覆盖,让内容生成和接收方式更灵活。
tags:
  - Gemini
  - Nano banana
  - Nano Banana
  - AI 生图
---

# Gemini 图像生成与非流式模式支持 🎨

LobeHub 在八月至九月新增 Gemini 2.5 Flash Image 图像生成能力,支持非流式响应模式,并扩展多个 AI 服务商与模型支持。
八月到九月,LobeHub 围绕 Gemini 补齐了图像生成、响应模式和模型覆盖。目标很直接:让新能力在真实使用场景更易接入,同时保持服务商选择的开放性。

## 🌟 重大更新
## 更多图像生成方式

- 🎨 Gemini 图像生成:支持 Gemini 2.5 Flash Image(Nano Banana)、Imagen 4 GA 等图像生成模型
- 🔄 非流式模式:新增非流式响应模式支持,适配更多使用场景
- 🌐 服务商扩展:新增 Nebius、AkashChat、BFL 等图像生成服务商支持
- 🖼️ Azure OpenAI 图像生成:支持通过 Azure OpenAI 生成图像
- 🔧 HTML 预览:支持 HTML 内容预览功能 📊 模型库更新新增 GPT-5 系列、Claude Opus 4.1、Grok Code Fast 1、DeepSeek V3.1、Gemini URL Context Tool 支持
本次新增 Gemini 2.5 Flash Image(Nano Banana)、Imagen 4 GA 支持,同时扩展 Nebius、AkashChat、BFL 等服务商。Azure OpenAI 也可处理图像生成请求。

## 💫 体验优化
全新的非流式响应模式适合需要完整响应后再展示的场景 —— 比如批处理、结构化输出,或期望同步结果的集成场景。

优化思维滚动遮罩效果、支持会话切换快捷键、改进移动端控件表单显示、优化 Gemini 错误提示
可用于:

- 直接在对话中通过 Gemini 模型生成图像
- 按使用场景在流式与完整响应模式间切换
- 将图像请求路由到新上线的服务商

## 模型库更新

模型库新增 GPT-5 系列、Claude Opus 4.1、Grok Code Fast 1、DeepSeek V3.1,以及用于检索增强的 Gemini URL Context Tool。

## 体验优化与修复

- 优化推理滚动遮罩效果
- 新增会话切换快捷键
- 改进移动端表单控件
- 完善 Gemini 错误提示信息

@@ -1,8 +1,7 @@
---
title: "Claude Sonnet 4.5 and Built-in Python Plugin \U0001F40D"
description: >-
  LobeHub v1.49.12 fully supports the DeepSeek R1 model, bringing an
  unprecedented chain-of-thought experience.
  Run Python directly in chat with the new built-in plugin, navigate long conversations faster, and work with Claude Sonnet 4.5 and other new models.
tags:
  - Claude Sonnet 4.5
  - Chain of Thought
@@ -11,21 +10,31 @@ tags:

# Claude Sonnet 4.5 and Built-in Python Plugin 🐍

From September to October, LobeHub added support for Claude Sonnet 4.5, introduced a built-in Python plugin, and improved chat list navigation and rich text editing.
From September through October, LobeHub added a built-in Python execution environment, integrated Claude Sonnet 4.5, and improved how you navigate and edit content in long conversations.

## 🌟 Key Updates
## Run Python code directly in chat

- 🐍 Built-in Python plugin: run Python code directly in chat
- 🤖 Claude Sonnet 4.5: integrates Anthropic’s latest reasoning model
- 🗺️ Chat list minimap: quick navigation to improve browsing efficiency in long conversations
- 📝 Rich text editor: supports math formulas, task lists, and parallel sending
- 🎨 Qwen image editing: edit images via Qwen models
- 🌐 Vercel AI Gateway: adds the Vercel AI Gateway provider
The new built-in Python plugin lets you execute Python code without leaving the conversation. Run calculations, transform data, or test snippets and see results immediately alongside your discussion.

## 📊 Model Library Updates
This keeps your workflow in one place—no need to switch to a separate notebook or terminal when you need to verify something quickly.

Adds providers such as Seedream 4.0, CometAPI, and NewAPI, and updates Gemini 2.5 video understanding capabilities.
## Navigate long conversations more easily

## 💫 Experience Improvements
The new chat list minimap gives you a condensed view of long conversations, making it easier to jump to specific sections without endless scrolling.

Adds resizable chat input, improves mobile title display, supports Base64 image syntax, and improves `.doc` parsing.
## Richer editing and content handling

The rich text editor now supports math formulas, task lists, and parallel sending. This makes it easier to draft structured messages with mixed content types.

Qwen image editing is also available—edit images through Qwen models directly in the chat interface.

## Model and provider updates

Claude Sonnet 4.5 brings Anthropic's latest reasoning capabilities to LobeHub. The release also adds Vercel AI Gateway as a provider, alongside Seedream 4.0, CometAPI, and NewAPI. Gemini 2.5 video understanding capabilities are updated.

## Improvements and fixes

- Chat input is now resizable for comfortable drafting
- Mobile title display is improved
- Base64 image syntax is now supported
- `.doc` file parsing is more reliable

@@ -1,6 +1,6 @@
---
title: "Claude Sonnet 4.5 与内置 Python 插件 \U0001F40D"
description: LobeHub v1.49.12 已完整支持 DeepSeek R1 模型,为用户带来前所未有的思维链交互体验
description: 用全新内置插件直接在对话中运行 Python,更快地浏览长会话,并接入 Claude Sonnet 4.5 等新模型。
tags:
  - Claude Sonnet 4.5
  - 思维链
@@ -9,21 +9,31 @@ tags:

# Claude Sonnet 4.5 与内置 Python 插件 🐍

LobeHub 在九月至十月新增 Claude Sonnet 4.5 模型支持,推出内置 Python 插件,并优化聊天列表导航与富文本编辑体验。
九月到十月,LobeHub 新增了内置 Python 执行环境,接入了 Claude Sonnet 4.5,并优化了长会话中的导航与内容编辑体验。

## 🌟 主要更新
## 直接在对话中运行 Python

- 🐍 内置 Python 插件:支持直接在聊天中执行 Python 代码
- 🤖 Claude Sonnet 4.5:接入 Anthropic 最新推理模型
- 🗺️ 聊天列表小地图:新增快速导航功能,提升长对话浏览效率
- 📝 富文本编辑器:支持数学公式、任务列表、并行发送等功能
- 🎨 Qwen 图像编辑:支持通过 Qwen 模型进行图像编辑
- 🌐 Vercel AI Gateway:新增 Vercel AI Gateway 服务商支持
全新内置 Python 插件让你无需离开对话即可执行代码。进行计算、转换数据或测试代码片段,结果立即在讨论旁展示。

## 📊 模型库更新
这让工作流保持在一处 —— 需要快速验证时,无需切换到独立的笔记本或终端。

新增 Seedream 4.0、CometAPI、NewAPI 等服务商,更新 Gemini 2.5 视频理解能力
## 更轻松地浏览长会话

## 💫 体验优化
新增的聊天列表小地图为长会话提供浓缩视图,让你无需无尽滚动即可跳转到特定段落。

优化聊天输入框支持调整大小、改进移动端标题显示、支持 Base64 图像语法、优化 .doc 文件解析
## 更丰富的编辑与内容处理

富文本编辑器现支持数学公式、任务列表和并行发送。这让混合内容类型的结构化消息起草更轻松。

Qwen 图像编辑也已上线 —— 可直接在聊天界面通过 Qwen 模型编辑图像。

## 模型与服务商更新

Claude Sonnet 4.5 将 Anthropic 最新的推理能力引入 LobeHub。本次还新增 Vercel AI Gateway 作为服务商,以及 Seedream 4.0、CometAPI、NewAPI 等。Gemini 2.5 视频理解能力同步更新。

## 体验优化与修复

- 聊天输入框现支持调整大小,便于舒适起草
- 改进移动端标题显示
- 支持 Base64 图像语法
- `.doc` 文件解析更稳定

@@ -1,9 +1,7 @@
---
title: ComfyUI Integration and Knowledge Base Improvements ⭐
description: >-
  Integrates ComfyUI workflows, adds support for multiple AI providers and
  models, and continues improving the knowledge base and overall user
  experience.
  Run ComfyUI visual workflows directly in LobeHub, organize knowledge with waterfall layouts and auto-extraction, and share outputs as PDF.
tags:
  - AI Knowledge Base
  - Workflow
@@ -12,20 +10,26 @@ tags:

# ComfyUI Integration and Knowledge Base Improvements ⭐

From October to November, LobeHub integrated ComfyUI workflows, added support for multiple AI providers and models, and continued to improve the knowledge base and overall user experience.
From October through November, LobeHub focused on making workflow-driven creation more practical. This release brings ComfyUI visual pipelines into your conversations, improves how teams organize knowledge at scale, and makes sharing outputs easier across formats.

## 🌟 Key Updates
## Visual workflows and document export

- 🎨 ComfyUI integration: supports integrating ComfyUI workflows
- 🤖 New providers: adds support for providers like Cerebras and CometAPI
- 📄 PDF export: export conversations as PDF
- 🗂️ Knowledge base improvements: adds a waterfall layout and automatically unzips files on upload
- 🖼️ Image generation expansion: supports SiliconFlow and Hunyuan Text-to-Image 3 image generation services
ComfyUI workflows are now part of LobeHub, letting you run existing visual pipelines without switching tools. Previously, teams using ComfyUI for image or video generation had to manage separate environments. Now you can bring those workflows directly into chat contexts and execute them alongside your AI conversations.

## 📊 Model Library Updates
Document export also expands to support PDF, making it simpler to archive conversations for compliance or share structured outputs with stakeholders who need offline copies.

Adds models such as Claude Haiku 4.5, GPT-5 Pro, MiniMax-M2, and Imagen 4 for Vertex AI.
## Knowledge organization at scale

## 💫 Experience Improvements
Managing large knowledge bases gets smoother with two practical additions. The new waterfall layout makes scanning extensive file collections more efficient when you are browsing hundreds of resources. And automatic archive extraction during upload removes the manual step of unzipping before importing—drop a ZIP file and the system handles the rest. These changes reduce friction when onboarding team knowledge or maintaining large reference libraries.

Improves rich text link rendering, enhances search experience, supports disabling rich text editing, adds hotkeys for delete and regenerate, and improves update notifications.
## Provider and model coverage

New provider support includes **Cerebras** and **CometAPI**, while image generation expands with **SiliconFlow** and **Hunyuan Text-to-Image 3**. The model library adds **Claude Haiku 4.5**, **GPT-5 Pro**, **MiniMax-M2**, and **Imagen 4** for Vertex AI.

## Improvements and fixes

- Improved rich text link rendering
- Enhanced search experience
- Added option to disable rich text editing
- Added hotkeys for delete and regenerate actions
- Refined update notification behavior

@@ -1,6 +1,6 @@
---
title: ComfyUI 集成与知识库优化 ⭐
description: 集成 ComfyUI 工作流,新增多个 AI 服务商与模型支持,并持续优化知识库与用户体验
description: 在 LobeHub 中直接运行 ComfyUI 可视化工作流,通过瀑布流布局和自动解压更高效地组织知识,并支持将对话导出为 PDF。
tags:
  - AI 知识库
  - 工作流
@@ -9,20 +9,26 @@ tags:

# ComfyUI 集成与知识库优化 ⭐

LobeHub 在十月至十一月集成 ComfyUI 工作流,新增多个 AI 服务商与模型支持,并持续优化知识库与用户体验。
十月到十一月,LobeHub 聚焦于让工作流驱动的创作更实用。这次更新将 ComfyUI 可视化流程引入对话场景,优化了团队知识组织方式,并让多格式输出分享更便捷。

## 🌟 重要更新
## 可视化工作流与文档导出

- 🎨 ComfyUI 集成:支持 ComfyUI 工作流集成
- 🤖 新增服务商:Cerebras、CometAPI 等服务商支持
- 📄 PDF 导出:支持将对话导出为 PDF 格式
- 🗂️ 知识库优化:新增瀑布流布局、支持上传时自动解压文件
- 🖼️ 图像生成扩展:支持硅基流动、混元 Text-to-Image 3 等图像生成服务
ComfyUI 工作流现已集成到 LobeHub,无需切换工具即可运行现有的可视化流程。以往使用 ComfyUI 进行图像或视频生成的团队需要维护独立环境,现在可以直接在对话上下文中引入并执行这些工作流。

## 📊 模型库更新
文档导出能力也扩展至 PDF 格式,便于将对话归档以满足合规需求,或向需要离线副本的利益相关者分享结构化输出。

新增 Claude Haiku 4.5、GPT-5 Pro、MiniMax-M2、Imagen 4 for Vertex AI 等模型
## 规模化知识组织

## 💫 体验优化
管理大型知识库的体验因两项实用改进而更加顺畅。新增的瀑布流布局让浏览海量文件资源时更高效,自动解压功能则省去了上传前手动解压的步骤 —— 直接拖拽 ZIP 文件,系统自动处理剩余工作。这些改进降低了团队知识导入和维护大型参考资料库时的摩擦。

优化富文本链接显示、改进搜索体验、支持禁用富文本编辑、新增删除与重新生成快捷键、改进更新通知
## 服务商与模型覆盖

新增服务商支持包括 **Cerebras** 与 **CometAPI**,图像生成能力扩展至**硅基流动**与**混元 Text-to-Image 3**。模型库新增 **Claude Haiku 4.5**、**GPT-5 Pro**、**MiniMax-M2** 及 Vertex AI 的 **Imagen 4**。

## 体验优化与修复

- 优化富文本链接渲染
- 改进搜索体验
- 新增禁用富文本编辑的选项
- 新增删除与重新生成快捷键
- 完善更新通知行为

@@ -1,6 +1,7 @@
---
title: "MCP Cloud Endpoints and Model Library Expansion \U0001F50C"
description: Adds multiple AI providers and improves knowledge base capabilities.
description: >-
  Connect to managed MCP tools from the marketplace without self-hosting, while new providers and knowledge base pages improve daily workflows.
tags:
  - MCP
  - LobeHub
@@ -10,17 +11,24 @@ tags:

# MCP Cloud Endpoints and Model Library Expansion 🔌

In November, LobeHub continued to improve model support and user experience by adding multiple AI providers and enhancing knowledge base capabilities.
In November, LobeHub continued improving model support and day-to-day usability. This update expands the MCP tool ecosystem with managed cloud endpoints, broadens provider coverage, and improves how teams organize knowledge and authenticate across platforms.

## 🌟 Key Updates
## Managed MCP tools and new providers

- 🔌 MCP cloud endpoints: supports integrating marketplace cloud-endpoint MCPs to expand the tool ecosystem
- 🤖 New providers: adds support for multiple AI providers including ZenMux, Nano Banana Pro, Qiniu Cloud, and more
- 📚 Knowledge base enhancements: supports creating pages, improves file management, and enhances RAG search experience
- 🎨 Image generation: adds support for more image models and improves image generation configuration
- 🔐 Auth improvements: improves OIDC authentication flow and desktop login experience
- 💬 Conversation improvements: supports topic hyperlinks and improves message editing and deletion
MCP cloud endpoints now support marketplace integrations, removing the need to self-host every tool. Instead of maintaining infrastructure for each capability, you can connect to managed endpoints directly from the marketplace and expand your available tools from a single place.

## 💫 Experience Improvements
New AI provider support includes **ZenMux**, **Nano Banana Pro**, and **Qiniu Cloud**, giving teams more options when routing requests or evaluating new models.

Improves topic list interactions, enhances tool call display, refines rich text editing, optimizes token usage statistics animations, and improves model selector sorting.
## Knowledge organization and authentication

The knowledge base now supports creating pages, making it easier to organize information hierarchically rather than relying on flat file structures. File management and RAG search improvements make day-to-day lookup more reliable.

OIDC authentication flow and desktop login experience are refined for smoother access across devices. Conversation workflows also gain topic hyperlink support and improved message editing and deletion behavior.

## Improvements and fixes

- Improved topic list interactions
- Enhanced tool call display
- Refined rich text editing
- Optimized token usage statistics animations
- Improved model selector sorting

@@ -1,6 +1,6 @@
---
title: "MCP 云端点与模型库扩展 \U0001F50C"
description: 新增多个 AI 服务商,并改进知识库功能。
description: 无需自行托管即可从市场连接托管 MCP 工具,新增服务商与知识库页面能力优化日常使用。
tags:
  - MCP
  - LobeHub
@@ -10,17 +10,24 @@ tags:

# MCP 云端点与模型库扩展 🔌

LobeHub 在十一月持续优化模型支持与用户体验,新增多个 AI 服务商,并改进知识库功能。
十一月,LobeHub 持续优化模型支持与日常使用体验。本次更新通过托管云端点扩展了 MCP 工具生态,补充了服务商覆盖,并改进了团队知识组织与跨平台认证体验。

## 🌟 重要更新
## 托管 MCP 工具与新增服务商

- 🔌 MCP 云端点:支持市场云端点 MCP 集成,扩展工具生态
- 🤖 新增服务商:支持 ZenMux、Nano Banana Pro、七牛云等多个 AI 服务商
- 📚 知识库增强:支持创建页面、优化文件管理,改进 RAG 搜索体验
- 🎨 图像生成:新增多个图像模型支持,优化图像生成配置
- 🔐 认证优化:改进 OIDC 认证流程,优化桌面端登录体验
- 💬 对话优化:支持话题超链接、改进消息编辑与删除功能
MCP 云端点现已支持市场集成,无需为每个工具自行托管维护基础设施。你可以直接从市场连接托管端点,在一处扩展可用工具集。

## 💫 体验优化
新增 AI 服务商支持包括 **ZenMux**、**Nano Banana Pro** 与**七牛云**,为请求路由和新模型评估提供更多选择。

优化话题列表交互、改进工具调用显示、完善富文本编辑、优化 Token 使用统计动画、改进模型选择器排序
## 知识组织与认证优化

知识库现已支持创建页面,便于以层级结构组织信息,不再局限于扁平的文件结构。文件管理与 RAG 搜索体验同步改进,让日常查阅更可靠。

OIDC 认证流程与桌面端登录体验得到优化,跨设备访问更顺畅。对话流程新增话题超链接支持,并改进了消息编辑与删除交互。

## 体验优化与修复

- 优化话题列表交互
- 改进工具调用显示
- 完善富文本编辑
- 优化 Token 使用统计动画
- 改进模型选择器排序

@@ -12,19 +12,24 @@ tags:

# LobeHub v2.0 🎉

January marks the landmark release of LobeHub v2.0, introducing powerful multi-agent group chat capabilities, refined model settings, and a streamlined authentication experience.
LobeHub v2.0 marks a practical shift from single-agent chat to real multi-agent collaboration. It also improves model configuration, introduces SSO-only mode, and brings desktop polish so day-to-day work feels steadier.

## What's New
## Group Chat for multi-agent work

- A major version upgrade with redesigned architecture and enhanced features
- Multi-Agent Collaboration: Bring multiple specialized agents into one conversation. They debate, reason, and solve complex problems together—faster and smarter.
- Agent Builder: Describe what you want, and LobeHub builds the complete agent—skills, behavior, tools, and personality. No setup required.
- Pages: write, read and organize documents with Lobe AI
- Memory: Your agents remember your preferences, style, goals, and past projects—delivering uniquely personalized assistance that gets better over time.
- New Knowledge Base: Use folders to organize your knowledge & resource
- Marketplace: Publish, adopt, or remix agents in a thriving community where intelligence grows together.
The headline capability is multi-agent group chat. Instead of asking one Agent to handle every step, you can run multiple specialized Agents in one conversation and have them work around the same goal.

## Improvement
## Build and evolve Agents faster

- Enhanced model settings: New ExtendParamsTypeSchema for more flexible model configuration
- Model updates: Updated Kimi K2.5 and Qwen3 Max Thinking models, plus Gemini 2.5 streaming fixes
v2.0 also makes Agent creation easier. With Agent Builder, you can describe your goal in plain language and generate an Agent with tools, behaviors, skills, and personality in one flow.

After creation, you can publish to the marketplace, remix from community templates, or adopt an Agent directly into your Workspace and keep iterating.

## Knowledge and workflow continuity

Knowledge workflows are more complete. You can write and organize documents with Pages, manage resources with folder-based knowledge organization, and use Agent Memory to carry forward preferences, style, goals, and prior project context.

## Improvements and fixes

- Added `ExtendParamsTypeSchema` for more flexible model configuration.
- Updated Kimi K2.5 and Qwen3 Max Thinking.
- Fixed Gemini 2.5 streaming behavior.

@@ -12,19 +12,24 @@ tags:

# LobeHub v2.0 🎉

LobeHub v2.0 正式发布,带来强大的多智能体群聊功能、优化的模型设置以及简化的身份验证体验。
LobeHub v2.0 的核心变化,是从单智能体对话走向真正的多智能体协作。版本同时带来更灵活的模型配置、SSO-only 模式和桌面端打磨,让日常使用更稳定、更顺手。

## 新功能
## 用群聊完成多智能体协作

- 重大版本升级,架构重新设计,功能增强
- 多智能体协作:将多个专业智能体汇聚于同一对话中。它们可以共同讨论、推理并解决复杂问题,速度更快、更智能。
- 智能体构建器:描述您的需求,LobeHub 将构建完整的智能体 —— 包括技能、行为、工具和个性。无需任何设置。
- 页面:使用 Lobe AI 编写、阅读和整理文档
- 记忆:您的智能体会记住您的偏好、风格、目标和过往项目,提供个性化的专属帮助,并随着时间的推移不断优化。
- 全新知识库:使用文件夹整理您的知识和资源
- 应用市场:在一个蓬勃发展的社区中发布、采用或重新组合智能体,共同提升智能水平。
这次最核心的升级是多智能体群聊。你不再需要让一个 Agent 包办所有任务,而是可以在同一会话里让多个专业 Agent 围绕同一个目标一起讨论和推理。

## 改进
## 更快创建并迭代 Agent

- 增强模型设置:新增 ExtendParamsTypeSchema,实现更灵活的模型配置
- 模型更新:更新了 Kimi K2.5 和 Qwen3 Max Thinking 模型,并修复了 Gemini 2.5 的流式传输问题
在创建和迭代 Agent 这件事上,v2.0 明显更轻。你可以通过智能体构建器直接描述目标,一次生成包含工具、行为、技能和个性的完整 Agent。

创建后,你可以发布到应用市场、采用社区模板,或把 Agent 直接纳入你的 Workspace 持续改造。

## 更连贯的知识与工作流

知识工作流也更完整了。你可以在 Pages 里编写和整理文档,用文件夹组织知识与资源,并通过 Agent Memory 延续偏好、风格、目标和历史项目上下文,让后续协作更连贯。

## 体验优化与修复

- 新增 `ExtendParamsTypeSchema`,模型配置更灵活。
- 更新 Kimi K2.5 与 Qwen3 Max Thinking。
- 修复 Gemini 2.5 的流式传输问题。

@@ -1,8 +1,8 @@
---
title: "Model Runtime & Authentication Improvements \U0001F527"
description: >-
  Enhanced model runtime with Claude Opus 4.6 on Bedrock, improved
  authentication flows, and better mobile experience.
  Improves model runtime reliability, authentication stability, and mobile
  experience, including Claude Opus 4.6 support on Bedrock.
tags:
  - Model Runtime
  - Authentication
@@ -12,17 +12,21 @@ tags:

# Model Runtime & Authentication Improvements 🔧

In February, LobeHub focused on model runtime enhancements, authentication reliability, and polishing the overall user experience across platforms.
This release focuses on runtime reliability and sign-in stability across web and mobile. It reduces authentication friction in daily use and makes model execution paths more dependable.

## 🌟 Key Updates
## Runtime and Authentication Updates

- 🤖 Claude Opus 4.6 on Bedrock: Added Claude Opus 4.6 support for AWS Bedrock runtime
- 📓 Notebook tool: Registered Notebook tool in server runtime with improved system prompts
- 🔗 OpenAI Responses API: Added end-user info support on OpenAI Responses API calls
- 🔐 Auth improvements: Fixed Microsoft authentication, improved OIDC provider account linking, and enhanced Feishu SSO
- 📱 Mobile enhancements: Enabled vertical scrolling for topic list on mobile, fixed multimodal image rendering
- 🏗️ Runtime refactoring: Extracted Anthropic factory and converted Moonshot to RouterRuntime
On the runtime side, Bedrock now supports Claude Opus 4.6. The server runtime also includes a better-integrated Notebook tool with improved prompts, and OpenAI Responses API calls now support end-user information when needed.

## 💫 Experience Improvements
Authentication was strengthened across providers. Microsoft authentication issues were fixed, OIDC account linking is now more reliable, and Feishu SSO behavior was improved to reduce sign-in friction.

Improved tasks display, enhanced local-system tool implementation, fixed PDF parsing in Docker, fixed editor content loss on send error, added custom avatars for group chat sidebar, and showed notifications for file upload storage limit errors.
This update also includes cross-platform and architecture polish. Topic lists now scroll vertically on mobile, multimodal image rendering issues are fixed, Anthropic factory logic has been extracted for cleaner runtime structure, and Moonshot has been migrated to RouterRuntime.

## Improvements and fixes

- Improved task display behavior.
- Enhanced local-system tool implementation.
- Fixed PDF parsing in Docker.
- Fixed editor content loss after send errors.
- Added custom avatars in the group chat sidebar.
- Added notifications for file upload storage limit errors.

@@ -1,6 +1,6 @@
---
title: "模型运行时与认证改进 \U0001F527"
description: 增强模型运行时并支持 Bedrock 上的 Claude Opus 4.6,改进认证流程,优化移动端体验。
description: 提升模型运行时稳定性、认证链路可靠性与移动端体验,并支持 Bedrock 上的 Claude Opus 4.6。
tags:
  - 模型运行时
  - 认证
@@ -10,17 +10,21 @@ tags:

# 模型运行时与认证改进 🔧

二月,LobeHub 专注于模型运行时增强、认证可靠性提升,以及跨平台用户体验的打磨优化。
这次发布聚焦在运行时可靠性与登录稳定性,覆盖 Web 和移动端。整体目标是减少认证过程中的中断,让模型调用链路在日常使用里更稳。

## 🌟 重要更新
## 运行时与认证更新

- 🤖 Bedrock 上的 Claude Opus 4.6:新增 AWS Bedrock 运行时对 Claude Opus 4.6 的支持
- 📓 笔记本工具:在服务端运行时注册笔记本工具,改进系统提示词
- 🔗 OpenAI Responses API:支持在 OpenAI Responses API 调用中添加终端用户信息
- 🔐 认证改进:修复 Microsoft 认证、改进 OIDC 提供商账户关联、增强飞书 SSO
- 📱 移动端增强:启用话题列表垂直滚动,修复多模态图像渲染
- 🏗️ 运行时重构:提取 Anthropic 工厂,将 Moonshot 转换为 RouterRuntime
在运行时能力上,Bedrock 已支持 Claude Opus 4.6。服务端也进一步完善了 Notebook 工具接入并优化提示词,同时 OpenAI Responses API 请求已支持按需携带终端用户信息。

## 💫 体验优化
认证链路方面进行了集中加固:Microsoft 登录问题已修复,OIDC 账户关联稳定性提升,飞书 SSO 行为也得到优化,登录摩擦进一步降低。

改进任务展示、增强本地系统工具实现、修复 Docker 中的 PDF 解析、修复发送错误时编辑器内容丢失、为群聊侧边栏添加自定义头像,以及在文件上传超出存储限制时显示通知。
此外,这个版本还包含跨端与架构层面的打磨:移动端话题列表已支持垂直滚动,多模态图片渲染问题已修复;运行时内部提取了 Anthropic 工厂逻辑,并将 Moonshot 迁移到 RouterRuntime。

## 体验优化与修复

- 改进任务展示体验。
- 增强本地系统工具实现。
- 修复 Docker 环境下的 PDF 解析问题。
- 修复发送失败后编辑器内容丢失的问题。
- 为群聊侧边栏增加自定义头像支持。
- 文件上传超出存储限制时显示明确通知。

@@ -12,16 +12,18 @@ tags:

 # Search Optimization & Agent Documents 🔍

-In March, LobeHub significantly enhanced its search infrastructure and introduced agent document capabilities, laying the groundwork for smarter knowledge retrieval.
+This release focuses on one practical goal: making knowledge easier to find and manage. Search now returns results with better speed and relevance, and agent-level documents sit on a clearer storage foundation.

-## 🌟 Key Updates
+## Search and storage foundation

-- 🔍 BM25 search indexes: Added BM25 indexes with ICU tokenizer for optimized full-text search
-- 📄 Agent documents: Introduced the `agent_documents` table for agent-level knowledge storage
-- 🗄️ pg_search extension: Enabled the `pg_search` PostgreSQL extension for advanced search capabilities
-- 📝 Topic descriptions: Added description column to the topics table for better topic organization
-- 🔑 API key security: Added API key hash column for enhanced security
+At the search layer, we added BM25 indexing with the ICU tokenizer and enabled PostgreSQL `pg_search`. Together, these upgrades give full-text retrieval a stronger base for both recall and ranking quality.

-## 💫 Experience Improvements
+For agent knowledge organization, this release introduces the new `agent_documents` table and adds a description field to topic metadata. This makes content structure and retrieval context more explicit.

-Fixed changelog auto-generation in release workflow, corrected stable renderer tar source path, and resolved market M2M token registration for trust client scenarios.
+On security hardening, we added API key hash storage to improve key handling safety.
+
+## Improvements and fixes
+
+- Fixed changelog auto-generation in the release workflow.
+- Corrected stable renderer tar source path.
+- Resolved market M2M token registration for trust-client scenarios.
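One item above is the new API key hash column. As a hedged illustration of the general pattern (the actual column name and digest algorithm are not shown in this diff, so SHA-256 here is an assumption), storing a one-way digest instead of the raw key can look like:

```typescript
// Sketch only: assumes a SHA-256 hex digest; the real scheme may differ.
import { createHash } from 'node:crypto';

// Compute the digest that would be persisted in the hash column,
// so lookups compare digests rather than raw keys.
export const hashApiKey = (apiKey: string): string =>
  createHash('sha256').update(apiKey).digest('hex');
```

With this pattern, a leaked database row never exposes the original key.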
@@ -10,16 +10,18 @@ tags:

 # 搜索优化与智能体文档 🔍

-三月,LobeHub 大幅增强了搜索基础设施,并引入智能体文档功能,为更智能的知识检索奠定基础。
+这次更新聚焦一个实际问题:让知识更容易被找到和管理。搜索结果更快、更准,智能体级文档也建立了更清晰的存储基础。

-## 🌟 重要更新
+## 搜索与存储基础能力

-- 🔍 BM25 搜索索引:新增基于 ICU 分词器的 BM25 索引,优化全文检索
-- 📄 智能体文档:引入 `agent_documents` 表,支持智能体级别的知识存储
-- 🗄️ pg_search 扩展:启用 `pg_search` PostgreSQL 扩展,提供高级搜索能力
-- 📝 话题描述:为话题表添加描述字段,改进话题组织管理
-- 🔑 API 密钥安全:新增 API 密钥哈希列,增强安全性
+在搜索层,我们新增了基于 ICU 分词器的 BM25 索引,并启用了 PostgreSQL `pg_search` 扩展。这两个升级一起强化了全文检索在召回与排序上的基础能力。

-## 💫 体验优化
+在智能体知识组织方面,本次新增了 `agent_documents` 表,并为话题元数据补充了描述字段,让内容结构与检索上下文更明确。

-修复发布工作流中的更新日志自动生成、修正稳定版渲染器打包路径,以及解决信任客户端场景下的市场 M2M 令牌注册问题。
+在安全加固方面,本版加入了 API 密钥哈希存储,进一步提升密钥处理安全性。
+
+## 体验优化与修复
+
+- 修复发布流程中的 changelog 自动生成问题。
+- 修正稳定版渲染器打包路径。
+- 解决信任客户端场景下的市场 M2M 令牌注册问题。
@@ -11,14 +11,18 @@ tags:

 # Image & Video Generation Redesign

-This week LobeHub refreshed the image and video generation experience, making it easier to create and browse visual content.
+This update refines the visual creation flow, so moving between image and video work feels quicker and more natural.

-## Key Updates
+## What’s New

-- Image & video generation redesign: completely overhauled the generation interface with a new switch to easily toggle between image and video creation
-- Memory management: you can now delete all memory entries at once for a clean slate
-- Bot improvements: restructured bot internals for better reliability and extensibility
+The generation interface has been redesigned for faster switching between image and video creation. Instead of jumping between separate flows, you can now move between both media modes more directly.
+
+Memory reset is also simpler now. When you need a clean context, you can clear all memory entries in one action.
+
+Under the hood, the agent architecture was refactored to improve reliability and leave more room for future extensions.

 ## Experience Improvements

-Fixed visual glitches in the compression view, improved mobile menu behavior, and corrected message count display accuracy.
+- Fixed visual glitches in compression view.
+- Improved mobile menu behavior.
+- Corrected message count display accuracy.
@@ -9,14 +9,18 @@ tags:

 # 图片与视频生成重设计

-本周 LobeHub 全面升级了图片与视频生成体验,让创作和浏览视觉内容更加便捷。
+这次更新聚焦于优化视觉创作流程,让图片与视频生成之间的切换更自然、操作更高效。

-## 重要更新
+## 更新内容

-- 图片与视频生成重设计:全新的生成界面,新增图片 / 视频切换功能,轻松在两种创作模式间自由切换
-- 记忆管理:支持一键清除所有记忆条目,快速重置对话记忆
-- Bot 改进:重构 Bot 内部架构,提升可靠性和可扩展性
+这次首先重做了生成流程。图片与视频创作不再像两条割裂的路径,新界面让你在两种模式之间切换更直接。
+
+记忆管理也变得更干净利落。需要重置上下文时,你现在可以一键清空全部记忆条目。
+
+此外,Agent 内部架构也做了重构,重点是提升稳定性,并为后续能力扩展留出空间。

 ## 体验优化

-修复压缩视图的显示异常,改进移动端菜单交互,修正消息计数显示的准确性。
+- 修复压缩视图显示异常。
+- 优化移动端菜单交互。
+- 修正消息计数显示准确性。
@@ -1,21 +1,26 @@
 ---
-title: Bot Management
+title: Agent Management
 description: >-
-  Introduced in-app notifications, bot management, and improved onboarding
-  experience.
+  Introduced in-app notifications, stronger agent management, and a smoother
+  onboarding experience.
 tags:
+  - Agent Tasks
-  - Bot Management
+  - Agent Management
   - Notification
   - Onboarding
 ---

-# Bot Management & Notification
+# Agent Tasks and Agent Management

-## Key Updates
+This update improves how teams adopt and run Agents day to day, especially when coordinating bots across multiple workflows.

-- Notification system: receive important updates and alerts directly inside LobeHub
-- Bot management: manage your bots with custom rendering and richer content support
-- Agent onboarding: a new guided onboarding flow helps you get started with agents quickly
-- Skill-specific icons: slash menu commands now show distinct icons for each skill, making them easier to find
-- GitHub Copilot improvements: better vision support and overall compatibility with GitHub Copilot
+LobeHub now includes in-app notifications, so important updates and alerts appear directly in the product. Agent management is also more flexible, with stronger rendering support and richer content handling in bot experiences.
+
+Getting started is smoother as well. A new guided onboarding path helps teams ramp up faster, and slash command discoverability improves with skill-specific icons. GitHub Copilot compatibility is also improved, including better vision-related behavior.

 ## Experience Improvements

+- Moved the Marketplace entry below Resources in the sidebar for a cleaner layout.
+- Added a visual cue when AI generation is interrupted.
+- Fixed display issues when switching between topics.
+- Improved error handling with a more user-friendly fallback state.
@@ -1,6 +1,6 @@
 ---
-title: 智能体任务系统与 Bot 管理
-description: 引入智能体任务系统、应用内通知、Bot 管理,以及改进的引导体验。
+title: 智能体任务系统与 Agent 管理
+description: 引入应用内通知、增强 Agent 管理能力,并优化整体引导体验。
 tags:
   - 智能体任务
   - Bot 管理
@@ -8,18 +8,17 @@ tags:
   - 引导
 ---

-# 智能体任务系统与 Bot 管理
+# 智能体任务系统与 Agent 管理

-本周 LobeHub 带来了强大的智能体新功能和更流畅的上手体验。
+这次更新重点优化了 Agent 的上手和日常运营体验,尤其适合在多场景协作中使用 Bot 的团队。

-## 重要更新
+LobeHub 现在支持应用内通知,重要更新和提醒可以直接在产品内收到。Agent 管理能力也进一步增强,在内容渲染和展示上更灵活,能覆盖更丰富的 Bot 使用场景。

-- 通知系统:在 LobeHub 内直接接收重要更新和提醒
-- Bot 管理:支持管理你的 Bot,提供自定义渲染和更丰富的内容展示
-- 智能体引导:全新的引导流程帮助你快速上手智能体功能
-- 技能专属图标:斜杠菜单中的命令现在显示各技能的专属图标,更容易查找
-- GitHub Copilot 改进:提升视觉识别支持和与 GitHub Copilot 的整体兼容性
+上手流程也更顺了。新版引导可以帮助团队更快进入 Agent 工作流,斜杠菜单加入技能专属图标后,常用命令更容易定位。与此同时,GitHub Copilot 兼容性也进一步提升,包含视觉相关能力的改进。

 ## 体验优化

-将市场入口移至侧边栏资源下方以优化布局,在 AI 生成被中断时添加可视化提示,修复话题切换时的显示异常,并改进错误处理以提供更友好的降级界面。
+- 将市场入口移至侧边栏「资源」下方,整体布局更清晰。
+- 在 AI 生成被中断时增加可视化提示。
+- 修复话题切换时的显示异常。
+- 改进错误处理,提供更友好的降级界面。
@@ -12,16 +12,13 @@ tags:

 # AI Auto-Completion & Real-Time Gateway

-Smarter editing with AI suggestions, real-time messaging via WebSocket, and broader bot platform connectivity.
+This release focuses on removing small points of friction in writing and real-time collaboration.

-## Key Updates
+The editor now supports AI auto-completion while you type, so drafting messages is faster and requires less context switching. On the delivery side, the new WebSocket-based Agent Gateway streams responses with lower latency, making live conversations feel more immediate.

-- AI auto-completion: the editor now suggests completions as you type, helping you compose messages faster
-- Real-time gateway: a new WebSocket-based Agent Gateway streams responses in real time for lower-latency conversations
-- Bot platform expansion: Feishu / Lark, Slack, and QQ now support WebSocket connection mode for more reliable message delivery
-- @ mention context injection: skills and tools are now invoked via @ mentions with direct context injection, replacing the previous slash-command approach
-- Skill Store skills tab: the Skill Store now has a dedicated Skills tab for easier browsing
-- Automatic topic creation: new topics are created automatically every 4 hours to keep conversations organized
+Cross-platform bot connectivity is also broader. Feishu/Lark, Slack, and QQ now support WebSocket connection mode for more reliable message delivery. Context invocation is also simpler: skills and tools can be triggered with `@` mentions and direct context injection, replacing the older slash-command-heavy flow.
+
+To keep navigation cleaner over time, the Skill Store now has a dedicated Skills tab, and topics are automatically created every four hours to keep conversations organized.

 ## Experience Improvements
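The automatic topic creation described in this hunk (a new topic every 4 hours) reduces to a simple time check. A minimal sketch of that idea (the helper name and signature are illustrative, not LobeHub's actual code):

```typescript
// Hypothetical helper: decide whether an incoming message should open a new topic,
// based on the "new topic every 4 hours" behavior in the changelog above.
const FOUR_HOURS_MS = 4 * 60 * 60 * 1000;

export const shouldStartNewTopic = (
  lastTopicCreatedAt: number, // epoch ms when the active topic was created
  now: number, // epoch ms of the incoming message
): boolean => now - lastTopicCreatedAt >= FOUR_HOURS_MS;
```

The real implementation would also handle the no-topic-yet case and persist the rollover, which this sketch omits.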
@@ -10,20 +10,17 @@ tags:

 # AI 自动补全与实时消息网关

-更智能的 AI 自动补全编辑体验、基于 WebSocket 的实时消息网关,以及更广泛的 Bot 平台连接支持。
+这版更新聚焦在减少写作和实时协作中的细碎阻力,让整体体验更连贯。

-## 重要更新
+编辑器现在支持 AI 自动补全,你在输入时就能收到建议,消息撰写更快、上下文切换更少。消息链路方面,全新的 WebSocket Agent 网关支持实时推送响应,整体对话延迟更低。

-- AI 自动补全:编辑器现在会在你输入时智能推荐补全建议,帮助你更快地撰写消息
-- 实时消息网关:全新的基于 WebSocket 的 Agent 网关可实时推送响应,降低对话延迟
-- Bot 平台扩展:飞书、Slack 和 QQ 现已支持 WebSocket 连接模式,消息传递更加稳定可靠
-- @ 提及上下文注入:技能和工具现在通过 @ 提及调用并直接注入上下文,取代了之前的斜杠命令方式
-- 技能商店技能标签:技能商店新增专属的「技能」标签页,浏览更加便捷
-- 自动创建话题:每 4 小时自动创建新话题,保持对话井然有序
+Bot 连接能力也扩展到了更多平台。飞书、Slack 和 QQ 已支持 WebSocket 连接模式,消息传递更稳定。与此同时,上下文调用也更直接:通过 `@` 提及即可触发技能和工具,并完成直接上下文注入,逐步替代以斜杠命令为主的旧方式。
+
+为了让长期使用时的导航更清晰,技能商店新增了专属「技能」标签页,系统也会每 4 小时自动创建新话题,帮助你持续整理会话上下文。

 ## 体验优化

-- 智能体文档现在支持渐进式加载,在内容就绪时即时展示,不再阻塞整个页面
+- 助理文档现在支持渐进式加载,在内容就绪时即时展示,不再阻塞整个页面
 - 修复了图片生成按钮错误默认选择模型的问题
 - 优化了粘贴性能,防止在粘贴大量剪贴板内容时聊天输入框卡顿
 - 加强了安全性,清理了 HTML 工件并修复了一个认证绕过漏洞
@@ -12,23 +12,22 @@ tags:

 # Agent Gateway & Customizable Sidebar

-Server-side agent execution over WebSocket, a fully customizable sidebar, and a new agent workspace for managing documents and tasks.
+This release focuses on making everyday Agent work more stable and easier to manage.

-## Key Updates
+Agents can now run on the server through Gateway mode and stream results over WebSocket. When you switch topics or hit a short disconnect, sessions reconnect and resume more smoothly, so long-running execution is less likely to break.

-- Gateway mode: agents now execute server-side and stream results back over WebSocket, with auto-reconnect when switching topics and seamless resume after disconnects
-- Customizable sidebar: choose which items appear in the sidebar and reorder them through a new customize modal, plus a recents section with search, rename, and quick actions
-- Agent workspace: a right-side panel for managing agent documents — browse, rename, delete files, and view document history all in one place
-- Task manager: a dedicated task manager view with its own topic state, so running tasks no longer interfere with your main conversations
-- Prompt rewrite & translate: rewrite or translate your prompt directly in the chat input before sending
-- Desktop CLI: the LobeHub CLI is now embedded in the desktop app and can be installed to your PATH from settings
-- Screen capture: capture your screen with an overlay picker and attach it directly to a conversation
-- New models: GLM-5.1 from Zhipu, Seedance 2.0 video generation, and a new StreamLake provider
+Navigation is now easier to shape around your own habits. You can choose which items appear in the sidebar, reorder them in a dedicated customization modal, and use a stronger Recents experience with search, rename, and quick actions.

-## Experience Improvements
+Document and task workflows are now more centralized. A dedicated right-side workspace gives you one place to browse, rename, delete, and review history for Agent documents. Running tasks move into an isolated task manager view with independent topic state, so your main conversations stay focused.

-- Desktop app now uses Electron's native fetch for remote requests, improving connection reliability
-- Loading states during optimistic updates prevent flickering when the assistant is thinking
-- Agent details pages load correctly on refresh instead of showing a perpetual spinner
-- Improved error classification for insufficient balance and deactivated accounts shows clearer messages
-- Fixed a context engine crash when non-string content was passed to document injection
+This update also improves several high-frequency input and tooling actions. You can rewrite or translate prompts directly in chat input before sending, capture screen content with an overlay picker and attach it in one step, and use LobeHub CLI from desktop with one-click install to your system `PATH`.
+
+On model coverage, this release adds Zhipu GLM-5.1, Seedance 2.0 video generation, and the StreamLake provider.
+
+## Improvements and fixes
+
+- Desktop now uses Electron native `fetch` for more reliable remote requests.
+- Optimistic loading states reduce streaming flicker while the assistant is thinking.
+- Agent detail pages now load correctly after refresh instead of staying in a spinner state.
+- Error classification now gives clearer messages for insufficient balance and deactivated accounts.
+- Fixed a context engine crash path caused by non-string content in document injection.
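The Gateway mode auto-reconnect described in this hunk is typically built on capped exponential backoff between attempts. A sketch of that idea (the constants here are assumptions for illustration, not values taken from the LobeHub client):

```typescript
// Delay before reconnect attempt `attempt` (0-based): doubles each try,
// capped at `maxMs` so waits never grow without bound.
export const reconnectDelayMs = (
  attempt: number,
  baseMs = 500, // assumed initial delay
  maxMs = 30_000, // assumed ceiling
): number => Math.min(baseMs * 2 ** attempt, maxMs);
```

Real clients often add random jitter on top of this curve to avoid reconnect storms when many sessions drop at once.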
@@ -10,23 +10,22 @@ tags:

 # Agent 网关与可自定义侧边栏

-通过 WebSocket 实现服务端智能体执行、完全可自定义的侧边栏,以及用于管理文档和任务的全新智能体工作区。
+这次更新聚焦在两件事:让日常 Agent 协作更稳定,也让操作路径更集中。

-## 重要更新
+Agent 现在可以通过网关模式在服务端运行,并通过 WebSocket 流式返回结果。切换话题或短暂断线后,会更顺畅地自动重连并恢复会话,长流程执行不再那么容易中断。

-- 网关模式:智能体现在在服务端执行并通过 WebSocket 实时推送结果,切换话题时自动重连,断线后无缝恢复
-- 可自定义侧边栏:通过新的自定义弹窗选择侧边栏显示哪些项目并调整排序,还新增了支持搜索、重命名和快捷操作的「最近」板块
-- 智能体工作区:右侧面板用于管理智能体文档 —— 在同一界面中浏览、重命名、删除文件并查看文档历史
-- 任务管理器:专属的任务管理视图拥有独立的话题状态,运行中的任务不再干扰你的主要对话
-- 提示词改写与翻译:发送前可直接在聊天输入框中改写或翻译你的提示词
-- 桌面端 CLI:LobeHub CLI 现已内嵌在桌面应用中,可从设置中安装到系统 PATH
-- 屏幕截图:使用覆盖层选择器截取屏幕内容,直接附加到对话中
-- 新模型:智谱 GLM-5.1、Seedance 2.0 视频生成,以及新的 StreamLake 提供商
+导航也更贴合个人习惯。你可以在专用弹窗里选择侧边栏显示项并调整顺序;「最近」板块也补齐了搜索、重命名和快捷操作,日常切换会更快。

-## 体验优化
+文档与任务这类高频操作也更集中。新增的右侧工作区可以在一处完成 Agent 文档的浏览、重命名、删除和历史查看。运行中的任务则进入独立任务视图,并使用独立话题状态,不再打断主对话。

-- 桌面应用现使用 Electron 原生 fetch 进行远程请求,提升连接稳定性
-- 乐观更新时的加载状态防止了助手思考时的界面闪烁
-- 智能体详情页在刷新后正确加载,不再显示无限加载动画
-- 改进了余额不足和账户停用的错误分类,展示更清晰的提示信息
-- 修复了非字符串内容传入文档注入时的上下文引擎崩溃问题
+另外,这版也优化了几项高频输入与工具操作。提示词可在发送前直接改写或翻译;屏幕截图支持覆盖层选择并一步附加到对话;LobeHub CLI 已内嵌到桌面应用,并可在设置中一键安装到系统 `PATH`。
+
+模型覆盖方面,本次新增智谱 GLM-5.1、Seedance 2.0 视频生成能力,以及 StreamLake 提供商。
+
+## 体验优化与修复
+
+- 桌面端现使用 Electron 原生 `fetch` 发起远程请求,连接更稳定。
+- 乐观更新的加载状态减少了助手思考阶段的界面闪烁。
+- Agent 详情页刷新后可正常加载,不再长期停留在加载动画。
+- 余额不足与账户停用场景的错误分类更准确,提示信息更清晰。
+- 修复了非字符串内容进入文档注入链路时触发的上下文引擎崩溃。
@@ -128,6 +128,13 @@ export class LocalSystemExecutionRuntime extends ComputerRuntime {
         return { fullContent: params.fullContent, loc, path: params.path };
       }

+      case 'globLocalFiles': {
+        return {
+          pattern: params.pattern,
+          scope: params.directory,
+        };
+      }
+
       default: {
         return params;
       }
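The fix above forwards the executor's `scope` as the runtime's `directory` instead of folding it into the pattern, which is how the glob scope gets honored. A standalone sketch of that parameter mapping (types simplified; the real interfaces live in the LobeHub codebase):

```typescript
// Simplified shapes: the executor-level call carries `scope`,
// while the runtime glob API expects `directory`.
interface GlobFilesParams {
  pattern: string;
  scope?: string;
}

// Mirror of the mapping in the diff: scope becomes the search root (`directory`)
// and the pattern stays relative, so results are confined to the scoped folder.
export const toRuntimeGlobArgs = (params: GlobFilesParams) => ({
  directory: params.scope,
  pattern: params.pattern,
});
```

This is also exactly what the new unit test below asserts: the relative pattern and the scope both survive the delegation unchanged.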
@@ -0,0 +1,39 @@
+import { beforeEach, describe, expect, it, vi } from 'vitest';
+
+import { localSystemExecutor } from './index';
+
+const { globFilesMock } = vi.hoisted(() => ({
+  globFilesMock: vi.fn(),
+}));
+
+vi.mock('@/services/electron/localFileService', () => ({
+  localFileService: {
+    globFiles: globFilesMock,
+  },
+}));
+
+describe('LocalSystemExecutor', () => {
+  beforeEach(() => {
+    vi.clearAllMocks();
+  });
+
+  describe('globLocalFiles', () => {
+    it('should preserve scope and relative pattern when delegating glob search', async () => {
+      globFilesMock.mockResolvedValue({
+        files: ['/tmp/images/a.png'],
+        success: true,
+        total_files: 1,
+      });
+
+      await localSystemExecutor.globLocalFiles({
+        pattern: '**/*.{png,jpg,jpeg,gif,webp}',
+        scope: '/tmp/images',
+      });
+
+      expect(globFilesMock).toHaveBeenCalledWith({
+        pattern: '**/*.{png,jpg,jpeg,gif,webp}',
+        scope: '/tmp/images',
+      });
+    });
+  });
+});
@@ -219,9 +219,9 @@ class LocalSystemExecutor extends BaseExecutor<typeof LocalSystemApiEnum> {

   globLocalFiles = async (params: GlobFilesParams): Promise<BuiltinToolResult> => {
     try {
-      const resolvedParams = resolveArgsWithScope(params, 'pattern');
       const result = await this.runtime.globFiles({
-        pattern: resolvedParams.pattern,
+        directory: params.scope,
+        pattern: params.pattern,
       });
      return this.toResult(result);
    } catch (error) {