---
title: 'LobeHub v1.6: GPT-4o Mini Joins the Default Lineup'
description: >-
  LobeHub v1.6 adds GPT-4o mini support, while LobeHub Cloud upgrades its
  default model to GPT-4o mini for stronger out-of-the-box conversations.
tags:
  - LobeHub
  - GPT-4o Mini
  - AI Conversation
  - Cloud Service
---

# LobeHub v1.6: GPT-4o Mini Joins the Default Lineup

OpenAI's lineup has fully entered the GPT-4 era, and LobeHub v1.6 follows suit by adding GPT-4o mini to the supported models. For LobeHub Cloud users, the upgrade goes further: GPT-4o mini replaces GPT-3.5-turbo as the default model.

The result is stronger conversations from your first message, without any configuration changes.

## GPT-4o Mini: Capable and Cost-Effective

GPT-4o mini brings GPT-4-level intelligence at a smaller scale. It's fast enough for real-time interactions and capable enough for most everyday tasks: drafting, analysis, coding help, and creative work.

Use GPT-4o mini when you want:

- Better reasoning than GPT-3.5 without the latency of full GPT-4o
- A cost-effective default for high-volume conversations
- Strong performance on instruction following and tool use

Switch to full GPT-4o or other providers (Claude 3.5 Sonnet, Gemini 1.5 Pro) when you need maximum capability for complex reasoning tasks.

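
In API terms, switching models is just the `model` field on an OpenAI-compatible chat completions request. A minimal sketch, assuming the standard request shape (`buildChatRequest` is a hypothetical helper for illustration, not a LobeHub export):

```typescript
// Hypothetical helper (not part of the LobeHub API): builds an
// OpenAI-compatible chat completions payload, defaulting to GPT-4o mini.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  stream: boolean;
}

function buildChatRequest(
  userMessage: string,
  // gpt-4o-mini is the new default; pass "gpt-4o" for harder tasks
  model: string = "gpt-4o-mini"
): ChatRequest {
  return {
    model,
    messages: [{ role: "user", content: userMessage }],
    stream: true, // stream tokens for a responsive chat UI
  };
}
```

The resulting payload would be POSTed to the provider's `/v1/chat/completions` endpoint; choosing `gpt-4o` instead (or a Claude/Gemini model through its own provider) requires no other changes to the request.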
## Cloud Service: Upgraded Defaults

For LobeHub Cloud users, the upgrade is automatic. New conversations start with GPT-4o mini by default, and existing users don't need to change any settings: the model switcher simply shows the new default first.

Cloud now supports:

- GPT-4o mini (default)
- GPT-4o
- Claude 3.5 Sonnet
- Gemini 1.5 Pro

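
The default-selection behavior described above boils down to "prefer GPT-4o mini when available, otherwise fall back down the list." A rough sketch of that logic, where the function name and fallback order are assumptions rather than Cloud's actual implementation:

```typescript
// Hypothetical sketch of default-model selection (names and fallback
// order are assumptions, not LobeHub Cloud's actual code).
const PREFERRED_DEFAULTS = [
  "gpt-4o-mini", // the v1.6 default
  "gpt-4o",
  "claude-3-5-sonnet",
  "gemini-1.5-pro",
];

function pickDefaultModel(available: string[]): string | undefined {
  for (const candidate of PREFERRED_DEFAULTS) {
    if (available.includes(candidate)) return candidate;
  }
  // Last resort: whatever the deployment exposes first, if anything.
  return available[0];
}
```

Keeping the preference list separate from the lookup makes the "upgraded default" a one-line change: promoting a new model means reordering the list, not touching selection logic.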
## Improvements and Fixes

- Added GPT-4o mini model configuration and parameter defaults
- Updated LobeHub Cloud default model selection logic
- Improved model switcher UI to highlight recommended options
- Fixed edge cases in streaming responses for newer OpenAI models