Unsloth Studio is a web UI for training and running open models like Gemma 4, Qwen3.5, DeepSeek, and gpt-oss locally.

Unsloth logo

Run and train AI models with a unified local interface.

Features • Quickstart • Notebooks • Documentation • Reddit

Unsloth Studio UI homepage

Unsloth Studio (Beta) lets you run and train text, audio, embedding, and vision models on Windows, Linux, and macOS.

Features

Unsloth provides several key features for both inference and training:

Inference

Training

  • Train and RL 500+ models up to 2x faster with up to 70% less VRAM, with no accuracy loss (see the sketch after this list).
  • Custom Triton and mathematical kernels. See some collaborations we did with PyTorch and Hugging Face.
  • Data Recipes: Auto-create datasets from PDF, CSV, DOCX, etc. Edit data in a visual-node workflow.
  • Reinforcement Learning (RL): The most efficient RL library, using 80% less VRAM for GRPO, FP8, etc.
  • Supports full fine-tuning, RL, pretraining, and 4-bit, 16-bit, and FP8 training.
  • Observability: Monitor training live, track loss and GPU usage, and customize graphs.
  • Multi-GPU training is supported, with major improvements coming soon.
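
As a rough illustration of the code-based training path, here is a minimal LoRA fine-tuning sketch using Unsloth Core together with TRL's SFTTrainer. The checkpoint name, LoRA hyperparameters, and the toy one-example dataset are illustrative assumptions, not recommended settings:

# Minimal LoRA fine-tuning sketch with Unsloth Core + TRL.
# Model name, LoRA settings, and the toy dataset are illustrative assumptions.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct",  # example checkpoint
    max_seq_length = 2048,
    load_in_4bit = True,  # quantized base weights to cut VRAM
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank (illustrative)
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
)
dataset = Dataset.from_list(
    [{"text": "### Instruction:\nSay hi.\n\n### Response:\nHi!"}]  # toy example
)
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 1,
        max_steps = 10,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()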

Quickstart

Unsloth can be used in two ways: through Unsloth Studio, the web UI, or through Unsloth Core, the code-based version. Each has different requirements.

Unsloth Studio (web UI)

Unsloth Studio (Beta) works on Windows, Linux, WSL and macOS.

  • CPU: Currently supported for Chat and Data Recipes
  • NVIDIA: Training works on RTX 30/40/50, Blackwell, DGX Spark, Station, and more
  • macOS: Currently supports Chat and Data Recipes; MLX training is coming very soon
  • AMD: Chat and Data Recipes work; for training, use Unsloth Core. Studio support is coming soon
  • Coming soon: Training support for Apple MLX, AMD, and Intel
  • Multi-GPU: Available now, with a major upgrade on the way

macOS, Linux, WSL:

curl -fsSL https://unsloth.ai/install.sh | sh

Windows:

irm https://unsloth.ai/install.ps1 | iex

Launch

unsloth studio -H 0.0.0.0 -p 8888

Update

To update, re-run the install commands above, or run (does not work on Windows):

unsloth studio update

Docker

Use our unsloth/unsloth Docker image. Run:

docker run -d -e JUPYTER_PASSWORD="mypassword" \
  -p 8888:8888 -p 8000:8000 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth

Developer, Nightly, Uninstall

For developer, nightly, and uninstallation instructions, see advanced installation.

Unsloth Core (code-based)

Linux, WSL:

curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv unsloth_env --python 3.13
source unsloth_env/bin/activate
uv pip install unsloth --torch-backend=auto

Windows:

winget install -e --id Python.Python.3.13
winget install --id=astral-sh.uv -e
uv venv unsloth_env --python 3.13
.\unsloth_env\Scripts\activate
uv pip install unsloth --torch-backend=auto

On Windows, pip install unsloth works only if PyTorch is already installed. Read our Windows Guide. You can also use the same Docker image as Unsloth Studio.
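
To confirm that a usable PyTorch is already present before installing, a quick check is:

# Quick sanity check that PyTorch is installed and sees a CUDA GPU.
import torch
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a usable CUDA GPU is visible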

Blackwell, AMD, Intel:

For RTX 50x, B200, and 6000 GPUs: uv pip install unsloth --torch-backend=auto. Read our guides for Blackwell and DGX Spark.
To install Unsloth on AMD and Intel GPUs, follow our AMD Guide and Intel Guide.
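
Once installed, loading a model and generating text with Unsloth Core looks roughly like the sketch below; the checkpoint name is just an example of a supported model:

# Minimal "load and generate" sketch with Unsloth Core.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct",  # example checkpoint
    max_seq_length = 2048,
    load_in_4bit = True,  # 4-bit loading to reduce VRAM
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors = "pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens = 64)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))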

📒 Free Notebooks

Train for free with our notebooks. You can use our new free Unsloth Studio notebook to run and train models in a web UI. Read our guide: add a dataset, run, then deploy your trained model.

| Model | Free Notebooks | Performance | Memory use |
| Gemma 4 (E2B) | ▶️ Start for free | 1.5x faster | 50% less |
| Qwen3.5 (4B) | ▶️ Start for free | 1.5x faster | 60% less |
| gpt-oss (20B) | ▶️ Start for free | 2x faster | 70% less |
| Qwen3.5 GSPO | ▶️ Start for free | 2x faster | 70% less |
| gpt-oss (20B): GRPO | ▶️ Start for free | 2x faster | 80% less |
| Qwen3: Advanced GRPO | ▶️ Start for free | 2x faster | 70% less |
| embeddinggemma (300M) | ▶️ Start for free | 2x faster | 20% less |
| Mistral Ministral 3 (3B) | ▶️ Start for free | 1.5x faster | 60% less |
| Llama 3.1 (8B) Alpaca | ▶️ Start for free | 2x faster | 70% less |
| Llama 3.2 Conversational | ▶️ Start for free | 2x faster | 70% less |
| Orpheus-TTS (3B) | ▶️ Start for free | 1.5x faster | 50% less |

🦥 Unsloth News

  • Gemma 4: Run and train Google's new models directly in Unsloth Studio! Blog
  • Introducing Unsloth Studio: our new web UI for running and training LLMs. Blog
  • Qwen3.5 - 0.8B, 2B, 4B, 9B, 27B, 35-A3B, 112B-A10B are now supported. Guide + notebooks
  • Train MoE LLMs 12x faster with 35% less VRAM - DeepSeek, GLM, Qwen and gpt-oss. Blog
  • Embedding models: Unsloth now supports ~1.8-3.3x faster embedding fine-tuning. Blog • Notebooks
  • New: 7x longer-context RL than all other setups, via our new batching algorithms. Blog
  • New RoPE & MLP Triton Kernels & Padding Free + Packing: 3x faster training & 30% less VRAM. Blog
  • 500K Context: Training a 20B model with >500K context is now possible on an 80GB GPU. Blog
  • FP8 & Vision RL: You can now do FP8 & VLM GRPO on consumer GPUs. FP8 Blog • Vision RL
  • gpt-oss by OpenAI: Read our RL blog, Flex Attention blog and Guide.

📥 Advanced Installation

The advanced instructions below are for Unsloth Studio. For Unsloth Core advanced installation, view our docs.

Developer installs: macOS, Linux, WSL:

git clone https://github.com/unslothai/unsloth
cd unsloth
./install.sh --local
unsloth studio -H 0.0.0.0 -p 8888

Then, to update:

unsloth studio update

Developer installs: Windows PowerShell:

git clone https://github.com/unslothai/unsloth.git
cd unsloth
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\install.ps1 --local
unsloth studio -H 0.0.0.0 -p 8888

Then, to update:

unsloth studio update

Nightly: macOS, Linux, WSL:

git clone https://github.com/unslothai/unsloth
cd unsloth
git checkout nightly
./install.sh --local
unsloth studio -H 0.0.0.0 -p 8888

Then to launch every time:

unsloth studio -H 0.0.0.0 -p 8888

Nightly: Windows:

Run in Windows PowerShell:

git clone https://github.com/unslothai/unsloth.git
cd unsloth
git checkout nightly
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\install.ps1 --local
unsloth studio -H 0.0.0.0 -p 8888

Then to launch every time:

unsloth studio -H 0.0.0.0 -p 8888

Uninstall

You can uninstall Unsloth Studio by deleting its install folder, usually located under $HOME/.unsloth/studio on macOS/Linux/WSL and %USERPROFILE%\.unsloth\studio on Windows. The commands below delete everything, including your history and cache:

  • macOS, WSL, Linux: rm -rf ~/.unsloth/studio
  • Windows (PowerShell): Remove-Item -Recurse -Force "$HOME\.unsloth\studio"

For more info, see our docs.

Deleting model files

You can delete old model files either from the bin icon in model search or by removing the relevant cached model folder from the default Hugging Face cache directory (see the sketch after this list). By default, HF uses:

  • macOS, Linux, WSL: ~/.cache/huggingface/hub/
  • Windows: %USERPROFILE%\.cache\huggingface\hub\
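
For scripted cleanup, one possible approach uses huggingface_hub's cache utilities; the repo id below is a hypothetical example:

# Sketch: list cached models and delete all revisions of one repo (irreversible).
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()  # scans the default Hugging Face cache directory
for repo in cache.repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e9:.2f} GB")

revisions = [
    rev.commit_hash
    for repo in cache.repos
    if repo.repo_id == "unsloth/Llama-3.2-1B-Instruct"  # example repo id
    for rev in repo.revisions
]
strategy = cache.delete_revisions(*revisions)
print("Will free:", strategy.expected_freed_size_str)
strategy.execute()
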
| Type | Links |
| Discord | Join Discord server |
| r/unsloth Reddit | Join Reddit community |
| 📚 Documentation & Wiki | Read Our Docs |
| Twitter (aka X) | Follow us on X |
| 🔮 Our Models | Unsloth Catalog |
| ✍️ Blog | Read our Blogs |

Citation

You can cite the Unsloth repo as follows:

@software{unsloth,
  author = {Daniel Han and Michael Han and Unsloth team},
  title = {Unsloth},
  url = {https://github.com/unslothai/unsloth},
  year = {2023}
}

If you trained a model with 🦥Unsloth, you can use this cool sticker!  

License

Unsloth uses a dual-licensing model of Apache 2.0 and AGPL-3.0. The core Unsloth package remains licensed under Apache 2.0, while certain optional components, such as the Unsloth Studio UI, are licensed under the open-source AGPL-3.0 license.

This structure helps support ongoing Unsloth development while keeping the project open source and enabling the broader ecosystem to continue growing.

Thank You to

  • The llama.cpp library that lets users run and save models with Unsloth
  • The Hugging Face team and their libraries: transformers and TRL
  • The PyTorch and TorchAO teams for their contributions
  • NVIDIA for their NeMo DataDesigner library and their contributions
  • And of course for every single person who has contributed or has used Unsloth!