Mirror of https://github.com/unslothai/unsloth
Synced 2026-04-21 13:37:39 +00:00
27 commits
commit 3869fbe1cc
Bump installer minimum to 2026.4.5 (#5041)
commit 13928b5f0e
Add configurable PyTorch mirror via UNSLOTH_PYTORCH_MIRROR env var (#5024)

* Add configurable PyTorch mirror via UNSLOTH_PYTORCH_MIRROR env var
  When set, UNSLOTH_PYTORCH_MIRROR overrides the default https://download.pytorch.org/whl base URL in all four install scripts (install.sh, install.ps1, studio/setup.ps1, studio/install_python_stack.py). When unset or empty, the official URL is used. This lets users behind corporate proxies, or in regions with poor connectivity to pytorch.org, point at a local mirror without patching scripts.

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Add pytest for UNSLOTH_PYTORCH_MIRROR in install_python_stack.py
  Tests that _PYTORCH_WHL_BASE picks up the env var when set, falls back to the official URL when unset or empty, and preserves the value as-is (including trailing slashes).

* Remove stale test assertions for missing install.sh messages

* Fix GPU mocking in test_get_torch_index_url.sh
  Extract _has_usable_nvidia_gpu and _has_amd_rocm_gpu alongside get_torch_index_url so the GPU-presence checks work in tests. Add -L flag handling to the mock nvidia-smi so it passes the GPU listing check. All 26 tests now pass on CPU-only machines.

* Strip trailing slash from UNSLOTH_PYTORCH_MIRROR to avoid double-slash URLs

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
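The mirror-override behavior described in this commit can be sketched as a small helper. This is a minimal sketch, not the actual installer code; the function name `pytorch_wheel_base` is illustrative (the real value is `_PYTORCH_WHL_BASE` in studio/install_python_stack.py):

```python
import os

# Default official PyTorch wheel index, used when no mirror is configured.
_OFFICIAL_BASE = "https://download.pytorch.org/whl"

def pytorch_wheel_base() -> str:
    """Return the wheel index base URL, honoring UNSLOTH_PYTORCH_MIRROR.

    Unset or empty means the official URL; a set value is used as-is except
    that a trailing slash is stripped to avoid double-slash URLs when joined.
    """
    mirror = os.environ.get("UNSLOTH_PYTORCH_MIRROR", "").strip()
    if not mirror:
        return _OFFICIAL_BASE
    return mirror.rstrip("/")
```

Users behind a proxy would then export `UNSLOTH_PYTORCH_MIRROR=https://mirror.internal/pytorch/whl` before running the install scripts.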
commit 5b8dbdc3c2
Fix bitsandbytes ROCm install by using pip instead of uv (#4966)

* Fix bitsandbytes ROCm install by using pip instead of uv

* Also use pip for PyPI fallback path in _install_bnb_rocm
  The original fix correctly switched the pre-release wheel install from uv to pip, but left the PyPI fallback path on uv. If uv breaks bnb on ROCm, the fallback would hit the same issue. Move the pip bootstrap before the branch so both paths use pip consistently.

* Harden pip bootstrap: try ensurepip first, warn on failure
  - Try ensurepip --upgrade before falling back to uv pip install pip. ensurepip works offline and does not need PyPI, making the bootstrap robust when the network or index is unavailable.
  - If both ensurepip and uv fail, emit a visible warning instead of silently swallowing the error (which previously led to a cryptic "No module named pip" downstream).
  - Use run_maybe_quiet so --verbose users see bootstrap output.
  - Update the comment to document the actual root cause: uv rejects the wheel because the filename version and the metadata version disagree.

* Add --isolated to pip install calls in _install_bnb_rocm
  uv pip install ignores pip.conf and PIP_* env vars, but python -m pip reads them. Without --isolated, users with PIP_INDEX_URL pointing to a private mirror that does not carry bitsandbytes would see the PyPI fallback fail where it previously worked under uv. --isolated restores parity with the old uv behavior.

* Drop --isolated from PyPI fallback in _install_bnb_rocm
  --isolated suppresses PIP_INDEX_URL, PIP_EXTRA_INDEX_URL, and pip.conf. This is correct for the pre-release path (hardcoded GitHub URL, no index consulted), but breaks the PyPI fallback for users in corporate or air-gapped environments whose only route to bitsandbytes is a private mirror configured via those mechanisms. Keep --isolated on the direct-URL pre-release install; drop it from the index-dependent fallback.

* Drop --isolated from pre-release pip install, fix warning wording
  --isolated suppresses pip.conf cert/proxy/CA settings in addition to index config. For the direct GitHub URL, index config is irrelevant but cert/proxy settings matter in corporate SSL-inspection environments. Without this fix, users with pip.conf-based CA bundles get a TLS error on the pre-release download and silently fall back to the broken PyPI version -- the exact outcome the PR is trying to prevent. Also fix the fallback warning: "unreachable" is too specific, since the pre-release install can fail for reasons other than network reachability.

Co-authored-by: Daniel Han <danielhanchen@gmail.com>
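The hardened bootstrap described in "try ensurepip first, warn on failure" can be sketched as below. This is an illustrative sketch under stated assumptions, not the repository's actual helper: the function name `bootstrap_pip` and the injectable `run` parameter are inventions for testability, and the real code routes output through run_maybe_quiet:

```python
import subprocess
import sys

def bootstrap_pip(run=subprocess.run) -> bool:
    """Try ensurepip first (works offline, no PyPI needed), then fall back
    to `uv pip install pip`; warn visibly if both fail instead of silently
    swallowing the error."""
    for cmd in ([sys.executable, "-m", "ensurepip", "--upgrade"],
                ["uv", "pip", "install", "pip"]):
        try:
            run(cmd, check=True, capture_output=True)
            return True
        except (subprocess.CalledProcessError, OSError):
            continue  # try the next bootstrap mechanism
    print("warning: could not bootstrap pip via ensurepip or uv; "
          "subsequent pip installs may fail", file=sys.stderr)
    return False
```

The ordering matters: ensurepip ships with CPython and needs no network, so it is the robust first choice when the index is unavailable.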
commit 65b4028560
Pin bitsandbytes to continuous-release_main on ROCm (4-bit decode fix) (#4954)

* Pin bitsandbytes to continuous-release_main on ROCm for 4-bit decode fix
bitsandbytes 0.49.2 on PyPI ships with a broken 4-bit GEMV kernel on
every ROCm target:
- CDNA (gfx90a / gfx942 / gfx950 = MI210 / MI300X / MI350) via a
broken blocksize=32/64 warp64 GEMV kernel whose tests were
explicitly skipped with ROCM_WARP_SIZE_64 guards because the
code was known broken.
- RDNA3 / RDNA3.5 (gfx1100-1103 / gfx1150-1152) via a compile-time
BNB_WARP_SIZE macro in the host-side dispatch that resolves to
64 when the multi-arch wheel is compiled with CDNA as the
primary target, so num_blocks is wrong on RDNA and half the GEMV
output is never written.
At decode shape (1, 1, hidden) both bugs produce NaN. Training is
unaffected because training shapes are (batch, seq_len > 1, hidden)
and never touch the GEMV path. The crash during autoregressive
inference surfaces as _assert_async_cuda_kernel in torch.multinomial
which on HIP becomes a hard HSA_STATUS_ERROR_EXCEPTION instead of
a clean Python error.
Both bugs are fixed by bitsandbytes commit 713a3b8 ("[ROCm] Enable
blocksize 32 4-bit quantization and GEMV kernels on AMD CDNA",
PR #1887, merged 2026-03-09) which replaces BNB_WARP_SIZE with a
runtime hipDeviceGetAttribute query and ships a working CDNA warp64
kernel. That commit has not shipped to PyPI yet, but
continuous-release_main wheels are published on every push to bnb
main via GitHub Releases.
Point the ROCm install path at the continuous-release_main x86_64 and
aarch64 wheels and fall back to PyPI >=0.49.1 when the pre-release is
unreachable (offline installs, firewalled hosts, or architectures not
covered by the pre-release wheels). Drop the pin once bnb cuts a
0.50+ tag on PyPI.
Verified on MI300X (gfx942, ROCm 7.2, torch 2.10.0+rocm7.1): direct
bnb GEMV shape test now returns 0.0078 max abs error at seq_len=1
(no NaN) vs NaN on 0.49.2, and full Unsloth + for_inference + 4-bit
sampling generation works end-to-end.
NVIDIA / CPU / Mac / Windows paths are unaffected -- the helper is
gated on the ROCm torch index and platform.machine() respectively.
* Drop Studio ROCm 16-bit fallback now that bnb 0.50+ fixes 4-bit decode
The 16-bit fallback in studio/backend/core/inference/inference.py was
added as a workaround for a bug that this PR already fixes at the
install layer: bitsandbytes <= 0.49.2 has a broken 4-bit GEMV kernel
on every ROCm target, which NaNs at decode shape (seq_len=1) and
crashes autoregressive inference. bnb PR #1887 (commit 713a3b8, in
0.50.0.dev0+, pinned by install.sh / install_python_stack.py in this
PR) restores correct 4-bit decode on MI300X and verified working
end-to-end with full Unsloth + for_inference + sampling.
Revert the dual code path so ROCm and NVIDIA both go through the
normal FastLanguageModel.from_pretrained + for_inference flow:
- Remove the conditional `from unsloth import` that skipped the
import on ROCm. The monkey-patches it was trying to avoid were
never the cause of the crash; bnb 4-bit GEMV was.
- Remove the `if _hw_module.IS_ROCM:` branch in load_model that
loaded with plain transformers + PEFT + bfloat16, and the
`_resolve_fp16_base` helper it relied on.
- Remove the `get_chat_template is not None` fallback in
_load_chat_template_info -- get_chat_template is now always
imported.
- Refactor the audio/vision ROCm guard to check _hw_module.IS_ROCM
directly instead of the removed _IS_ROCM_ENV global. Audio and
vision on ROCm still need separate validation (FastVisionModel
and the CSM audio codecs were never tested on HIP) so the guard
stays for now.
Add _bnb_rocm_4bit_ok() as a runtime safety net for users who
install from this PR before the install.sh bnb pin kicks in, or
whose installer fell back to the PyPI pin because the continuous-
release wheel was unreachable. When the installed bnb is < 0.50 on
ROCm, force load_in_4bit=False and strip any -unsloth-bnb-4bit /
-bnb-4bit suffix from the model path so a pre-quantized repo
resolves to its FP16 sibling instead of pulling bnb back in via
the repo's quantization_config. LoRA adapters whose base is a
pre-quantized repo on old bnb will still fail inside Unsloth's
loader -- the only real fix there is `unsloth studio update`.
Verified on MI300X (gfx942, ROCm 7.2, torch 2.10.0+rocm7.1):
- HAPPY path (bnb 0.50.0.dev0, load_in_4bit=True, pre-quantized
repo): loads in 4-bit via the fixed GEMV, generation returns
"Paris." for greedy and sampling.
- SAFETY-NET path (simulated old bnb, suffix-stripped to the
FP16 sibling, load_in_4bit=False): loads in bf16, generation
returns "Paris." for greedy and sampling.
Net diff is ~45 lines smaller than the pre-revert state because
the entire plain-transformers 16-bit branch is gone.
* Cache _bnb_rocm_4bit_ok() with functools.cache
load_model() can be called many times in a single session but the bnb
version and hardware state cannot change at runtime, so memoise the
check. First call is ~1.9 ms (dominated by the lazy `import bitsandbytes`
inside the try block), subsequent calls drop to sub-microsecond dict
lookups. Zero behavioral change.
* Shorten verbose bnb/ROCm comments
Comment-only cleanup across install.sh, studio/install_python_stack.py,
and studio/backend/core/inference/inference.py. No behavioral change.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Remove _bnb_rocm_4bit_ok safety net from inference.py
Studio's ROCm support is brand new (PR #4720, merged today) and every
fresh install pulls the bnb continuous-release_main wheel via
install.sh / install_python_stack.py in this same PR. There are no
existing ROCm Studio installs carrying bnb < 0.50, so the defensive
version-check fallback is guarding against a scenario that cannot
actually occur. Delete the helper, the functools import, and the
safety-net block -- inference.py now calls FastLanguageModel.from_pretrained
directly with no ROCm branching.
* Drop audio/vision ROCm guard in inference.py — verified unblocked by bnb fix
Vision inference was blocked by the same bnb 4-bit GEMV bug that affected
text inference (vision models use bnb 4-bit for the LM backbone). With
bnb 0.50+ pinned in install.sh / install_python_stack.py, vision works
end-to-end on MI300X: Llama-3.2-11B-Vision-Instruct-unsloth-bnb-4bit
loaded in 4-bit via FastVisionModel + for_inference returns a correct
answer to a multimodal prompt.
Audio (CSM) was never actually blocked by HIP — on this hardware CSM
loads and runs its backbone forward pass fine with bnb 0.50, then fails
during generate() with a transformers-level kwarg validation mismatch
in generation_csm.py (`backbone_last_hidden_state` rejected). That's a
pre-existing transformers/CSM integration bug that reproduces identically
on NVIDIA, so the ROCm-gated guard was never actually protecting users
from anything HIP-specific.
Remove the combined audio/vision guard and the now-unused _hw_module
import. Also restore the one-word "Can be" in an inline comment that
drifted during the earlier comment-shortening pass, so the inference.py
delta vs pre-#4720 is exactly the max_seq_length<=0 crash fix and
nothing else.
* Shorten max_seq_length=0 guard comment to one line
---------
Co-authored-by: Daniel Han <danielhanchen@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

commit cad8c6ad05
Add AMD ROCm/HIP support across installer and hardware detection (#4720)
* Add ROCm detection to install.sh and expand shell tests
  Add AMD ROCm GPU detection to get_torch_index_url() in install.sh. When nvidia-smi is not found, probe for ROCm via amd-smi, the /opt/rocm version file, hipconfig, dpkg-query, and rpm. Includes a validation guard for malformed _rocm_tag, Debian epoch prefix stripping, a ROCm 7.2+ cap to the rocm7.1 index, a bitsandbytes AMD install, and status messaging. Shell tests expanded to 23 cases.
  Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Add ROCm torch reinstall support to install_python_stack.py
  Add _detect_rocm_version() and _ensure_rocm_torch() to detect when a Linux host has ROCm but the venv received CPU-only torch, and reinstall with the correct ROCm wheels. Covers ROCm 6.0 through 7.1, with a 30-second timeout on the torch GPU probe subprocess.
  Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Add ROCm support to llama.cpp prebuilt installer
  Add a has_rocm field to HostInfo, extend detect_host() to probe for ROCm via hipcc/amd-smi/rocm-smi/ROCM_PATH, and route ROCm hosts to upstream prebuilts (Linux ROCm 7.2 prebuilt with source fallback, Windows HIP prebuilt with CPU fallback). Add linux-rocm and windows-hip install kinds to runtime_patterns_for_choice().
  Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Add IS_ROCM hardware flag and fix AMD error message
  Add an IS_ROCM flag to hardware.py detect_hardware() (set when torch.version.hip is present; DeviceType stays CUDA). Export IS_ROCM from __init__.py. Add a "rocm" key to get_package_versions(). Replace the "We do not support AMD" error in tokenizer_utils.py with a helpful message pointing to the ROCm installation docs.
  Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Add comprehensive ROCm support test suite (68 tests)
  Add tests/studio/install/test_rocm_support.py covering all ROCm code paths across install_llama_prebuilt.py, install_python_stack.py, hardware.py, tokenizer_utils.py, and install.sh. All tests use mocks and run without AMD hardware. Covers: asset selection (11), runtime patterns (5), HostInfo (4), ROCm version detection (9), torch reinstall (9), index mapping (8), hardware flag (8), tokenizer message (2), install.sh structure (10), and live regression (1).

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Harden ROCm support: probe error handling, version cap, validation
  Address review findings from 8 independent reviewers:
  - Wrap the _ensure_rocm_torch() torch probe in try/except for TimeoutExpired and OSError so a hung or broken torch import does not crash the installer (8/8 reviewers flagged this)
  - Add a torch>=2.4,<2.11.0 version cap to the ROCm reinstall path to prevent installing unsupported torch 2.11.0 from the rocm7.1 index
  - Use a with-statement for file reads in _detect_rocm_version() to avoid resource leaks
  - Handle ROCM_PATH="" correctly (use `or "/opt/rocm"` instead of a default parameter to avoid relative path resolution)
  - Strengthen the shell validation guard from rocm[0-9] to rocm[1-9] to reject rocm0.x tags that would produce nonexistent PyTorch index URLs
  - Switch the shell version cap from a blocklist to an allowlist (rocm6.*|rocm7.0*|rocm7.1* pass through, everything else caps to rocm7.1) so future ROCm 10+ does not fall through to a nonexistent index
  - Add sorted() to the _ROCM_TORCH_INDEX lookup for defensive ordering
  - Fix test_probe_timeout_handled: replace a zero-assertion test with proper assertions verifying the reinstall proceeds after a timeout

* Clean up rocm_paths list construction in detect_host()
  Filter None from the ROCM_PATH env var lookup at list construction time instead of relying on the inline `if p` guard in the any() call.

* Require actual AMD GPU presence before selecting ROCm paths
  All 8 reviewers across 2 cycles independently flagged that ROCm detection used toolkit/filesystem hints (hipcc, /opt/rocm, rocm-core) as a proxy for GPU presence, which would misroute CPU-only or NVIDIA hosts that happen to have ROCm tools installed. Now all 3 detection points (install.sh, install_python_stack.py, install_llama_prebuilt.py) probe for an actual AMD GPU before entering the ROCm path:
  - install.sh: check rocminfo for gfx* GPU names, or amd-smi list for device rows, before version detection
  - install_python_stack.py: a new _has_rocm_gpu() function probes rocminfo and amd-smi list before _ensure_rocm_torch() proceeds
  - install_llama_prebuilt.py: detect_host() probes rocminfo/amd-smi list instead of just checking tool existence or directory paths
  Also:
  - The shell test mock amd-smi now handles the "list" subcommand
  - Python tests updated to mock _has_rocm_gpu where needed
  - Added test_no_gpu_with_rocm_tools_skips to verify the new guard
  - Test index lookups now use sorted() to match the production code

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Harden hipconfig version parsing and torch probe compatibility
  - Add a parts[1].isdigit() check in hipconfig version parsing to handle versions like "6.3-HIP" where the minor component has a non-numeric suffix (strip the "-" prefix before the int() conversion)
  - Use getattr() in the torch probe subprocess to safely handle old or custom torch builds that may lack torch.version.hip/cuda attributes

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Strengthen AMD GPU detection and add NVIDIA precedence guard
  - Change amd-smi list detection from any-non-empty-output to requiring a "gpu" marker in the output, matching the shell-side NR>1 check. Prevents false positives from header-only amd-smi list output.
  - Add an nvidia-smi check at the top of _ensure_rocm_torch() so mixed AMD+NVIDIA hosts preserve NVIDIA precedence (matching install.sh and install_llama_prebuilt.py behavior).
  - Apply the same amd-smi marker fix to install_llama_prebuilt.py detect_host() for consistency.

* Add Windows-specific ROCm/HIP detection in detect_host()
  The previous detect_host() ROCm check used rocminfo and amd-smi list, which are Linux-only tools. On Windows, has_rocm would always be False, making the Windows HIP prebuilt path at line 1794 unreachable. Now detect_host() uses platform-specific detection:
  - Linux: rocminfo (check for gfx GPU names) or amd-smi list
  - Windows: hipinfo.exe, amd-smi, or amdhip64.dll on PATH
  This allows Windows AMD users to get the HIP prebuilt binary instead of silently falling through to the CPU prebuilt.

* Add AMD ROCm gaps: Mamba/SSM source builds, GPU monitoring, Windows messaging, RDNA expansion
  - worker.py: add HIP detection to the causal-conv1d/mamba-ssm probe, check for hipcc before ROCm source builds, improve status messages and error reporting, add a timeout and uv support for the source build fallback
  - amd.py: new AMD GPU monitoring module via amd-smi metric --json, mirroring the nvidia.py structure (utilization, temperature, power, VRAM)
  - hardware.py: branch to amd.py when IS_ROCM is True for GPU utilization, visible GPU queries, and physical GPU count
  - install_python_stack.py: detect AMD GPUs on Windows and warn that ROCm-enabled PyTorch must be installed manually
  - kernels/utils.py: expand is_rdna() to cover RDNA2 (gfx1030-1032), RDNA3 (gfx1102-1103), RDNA3.5 (gfx1150-1152) alongside the existing entries
  - tests: add 32 new tests covering all changes (95/95 pass)

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Harden ROCm detection, fix VRAM heuristic, and expand RDNA2 coverage
  - Windows ROCm detection: validate actual GPU presence via hipinfo/amd-smi output markers instead of just checking tool existence on PATH
  - _ensure_rocm_torch: validate that nvidia-smi actually reports a GPU before giving NVIDIA precedence (fixes AMD-only hosts with stale NVIDIA tools)
  - amd.py _parse_numeric: handle dict-shaped metric objects from newer amd-smi versions ({"value": 10, "unit": "W"}) and strip MiB/GiB units
  - amd.py VRAM heuristic: raise the threshold from 100k to 10M to correctly handle MI300X (192 GB = 196608 MB) and other high-VRAM GPUs
  - amd.py visible GPU: use AMD-reported GPU IDs instead of the enumerate index so non-dense sets like CUDA_VISIBLE_DEVICES=1,3 report correctly
  - install.sh: add a ROCm <6.0 minimum version guard (no PyTorch wheels exist for older versions); fix the rocm7.1* glob to not match rocm7.10+
  - is_rdna: add gfx1033-1036 for RDNA2 mobile GPUs (RX 6600M etc.)
  - worker.py: increase the ROCm source build timeout from 600s to 1800s; fix the success log message for ROCm source builds
  - tests: update mocks for _has_usable_nvidia_gpu, add RDNA2 target asserts

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Add HIP_VISIBLE_DEVICES support, unit-aware VRAM parsing, Windows GPU validation
  - hardware.py: check HIP_VISIBLE_DEVICES and ROCR_VISIBLE_DEVICES on ROCm before falling back to CUDA_VISIBLE_DEVICES, so multi-GPU AMD setups with HIP-specific env vars report the correct visible device set
  - amd.py: add _parse_memory_mb(), which reads "unit" from dict-shaped amd-smi JSON (e.g. {"value": 192, "unit": "GiB"}) and converts to MB correctly; fixes MI300X VRAM misreported as 0.19 GB instead of 192 GB
  - install_python_stack.py: the Windows AMD warning now validates actual GPU presence via hipinfo/amd-smi output markers before printing
  - install_llama_prebuilt.py: restore the amdhip64.dll fallback for Windows HIP detection after the tool-based checks, so Windows HIP installs without CLI tools on PATH are still detected
  - hardware.py: fix the IS_ROCM comment to accurately describe its role

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Fix HIP_VISIBLE_DEVICES empty-string handling in GPU visibility spec
  Use explicit None checks instead of the Python `or` operator when reading HIP_VISIBLE_DEVICES / ROCR_VISIBLE_DEVICES, so that an empty string ("") is correctly honored as "no visible GPUs" rather than silently falling through to CUDA_VISIBLE_DEVICES on mixed ROCm+CUDA systems.

* Fix IS_ROCM test assertion for multi-line formatting

* Cap torchvision/torchaudio versions, remove amdhip64.dll fallback, fix visible GPU count
  - Cap torchvision<0.26.0 and torchaudio<2.11.0 alongside torch<2.11.0 in both install.sh and install_python_stack.py to prevent the resolver from selecting incompatible companion packages from the ROCm wheel index
  - Remove the amdhip64.dll fallback in Windows ROCm detection (DLL presence without hipinfo/amd-smi is not proof of GPU existence)
  - Fix get_visible_gpu_count() to use _get_parent_visible_gpu_spec(), which respects HIP_VISIBLE_DEVICES/ROCR_VISIBLE_DEVICES on ROCm hosts

* Attribute the is_rdna() RDNA2/3/3.5/4 expansion to PR #4428
  The is_rdna() expansion to cover RDNA2 (gfx1030-1036), RDNA3 (gfx1100-1103), RDNA3.5 (gfx1150-1152), and RDNA4 (gfx1200-1201) architectures is based on the original work from PR #4428.
  Co-authored-by: GoldenGrapeGentleman <yueyuan@amd.com>
  Co-authored-by: billishyahao <bill.he@amd.com>

* Support AMD Radeon for studio (#4770)
  Co-authored-by: Iswarya Alex <iswarya.alex@amd.com>

* Remove ROCm test files from main PR
  Move test_rocm_support.py and the shell test additions to a separate PR to keep the main ROCm support PR focused on implementation changes.

* Fix installer and hardware detection issues for PR #4720
  - Fix empty _tri_arg passed to uv pip install in the Radeon path (causes an "Empty field is not allowed for PEP508" error)
  - Fix the Radeon fallback: use the ROCm index instead of CPU-only when repo.radeon.com is unreachable (TORCH_INDEX_URL already has ROCm)
  - Use $TORCH_CONSTRAINT in fallback paths instead of hardcoded strings
  - Fix _pick_radeon_wheel: relax the suffix to match manylinux_2_28_x86_64 wheels (the AMD Radeon repo does not use the bare linux_x86_64 platform tag)
  - Fix the IS_ROCM export: use __getattr__ so callers always see the live value after detect_hardware() runs
  - Fix apply_gpu_ids: set HIP_VISIBLE_DEVICES and ROCR_VISIBLE_DEVICES on ROCm so _get_parent_visible_gpu_spec picks up the narrowed GPU set
  - Fix _parse_memory_mb: distinguish GB (1000 MB) from GiB (1024 MiB)
  - Add amd-smi version as a fallback in _detect_rocm_version
  - Fix trailing whitespace and a missing newline at EOF in install.sh

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Fix GPU detection false positives and add missing health groups
  - Fix a _has_rocm_gpu() false positive: require "GPU: <number>" data rows from amd-smi list, not just a header containing "gpu"
  - Apply the same fix in detect_host() in install_llama_prebuilt.py
  - Add runtime_payload_health_groups for linux-rocm and windows-hip so partial/corrupt ROCm/HIP prebuilt installs are properly detected
  - Add the bitsandbytes install to the Radeon fallback paths (it was only in the success path, skipped when repo.radeon.com was unreachable)
  - Keep DEVICE/CHAT_ONLY as direct imports in __init__.py (matching main) and only use __getattr__ for IS_ROCM

* Fix _ensure_rocm_torch and Windows AMD warning false positives
  - _ensure_rocm_torch: only skip when HIP is already present, not for CUDA builds (which are unusable on AMD-only hosts). Fixes the case where a venv has a stale CUDA wheel and the repair step is skipped.
  - Windows AMD warning: use the GPU data row check (same as the Linux fix) to avoid false positives from amd-smi list header-only output.

* Fix amd-smi GPU detection for GPU[N] output format
  Older amd-smi versions output "GPU[0] : Card series: ..." instead of "GPU: 0". The regex now matches both "GPU: <digit>" and "GPU[<digit>" formats to detect actual GPU data rows.

* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci

* Harden AMD GPU detection against false positives
  - install.sh: replace the weak amd-smi list check (awk 'NR>1 && NF') with strict pattern matching on GPU data rows (/^GPU[[:space:]]*[:\[]/)
  - All files: reject rocminfo gfx000 (the CPU HSA agent) by requiring gfx[1-9] instead of gfx[0-9] in the rocminfo GPU probe
  - Fixes false positives on hosts with ROCm tools but no AMD GPU

* Remove duplicate comment from pre-commit merge

* Refactor: deduplicate AMD detection, consolidate bitsandbytes, clean up imports
  - Extract a _has_amd_rocm_gpu() shell function to avoid duplicating the rocminfo/amd-smi GPU detection logic in get_torch_index_url and the Radeon auto-detect block
  - Consolidate the bitsandbytes install into a single case block after the torch install (it was duplicated 4 times across the Radeon success/fallback paths)
  - Move the math and re imports to the top of amd.py (they were inline in functions)
  - Add a _smi_query() helper in hardware.py to centralize IS_ROCM backend selection for get_gpu_utilization and get_visible_gpu_utilization
  Addresses Gemini code review suggestions.

* Fix VRAM parsing for string values and GB/GiB consistency
  - Extract the unit from string-valued VRAM fields (e.g. "192 GiB") so _parse_memory_mb correctly applies the unit multiplier instead of treating the value as bare MB
  - Treat GB and GiB identically (both as binary x1024), since GPU tools including amd-smi use binary units even when labeling them "GB"
  - Fixes incorrect VRAM reporting on MI300-class cards (was showing ~0.19 GB instead of 192 GB for string-valued outputs)

* Add --no-cache to uv for ROCm HIP source builds
  Avoid stale cache artifacts from partial HIP source builds when uv is used for causal-conv1d/mamba-ssm compilation on ROCm. The pip path already uses --no-cache-dir; this adds the uv equivalent (--no-cache) only when is_hip is True.

* Fix critical: initialize _amd_gpu_radeon before the case block
  _amd_gpu_radeon was only set inside the */rocm*) case arm, so on NVIDIA/CPU/macOS paths where TORCH_INDEX_URL does not contain "rocm", the variable was unbound. With set -u (nounset) enabled, this crashes the installer for every non-AMD user. Move the initialization to before the case block so it is always defined.

* Fix Windows AMD: route has_rocm hosts to the HIP prebuilt path
  resolve_release_asset_choice was selecting windows-cpu for all Windows x86_64 hosts, including those with has_rocm=True. Windows AMD users should fall through to resolve_upstream_asset_choice, which tries the HIP prebuilt first. Add a "not host.has_rocm" guard to the published windows-cpu selection.

* Harden ROCm detection, Radeon wheel fallback, and HIP visibility
  Addresses review findings from parallel reviewers on PR #4720:
  - install.sh: add a _has_usable_nvidia_gpu() helper requiring nvidia-smi -L to actually list a GPU before treating the host as NVIDIA. Fixes the stale-nvidia-smi-on-PATH regression where AMD-only hosts fell into the CUDA branch.
  - install.sh: fix the hipconfig awk blocks to propagate a non-zero exit code when the output is not a recognisable version string, so the ||-chain continues to dpkg-query / rpm instead of terminating early.
  - install.sh: fail closed on the Radeon wheel fallback. When torch, torchvision, or torchaudio is missing from the Radeon repo for the active Python tag, fall back to the standard ROCm index instead of silently mixing Radeon wheels with PyPI defaults. Quote all wheel arguments individually so wheel filenames cannot be word-split or glob-expanded.
  - install_llama_prebuilt.py: detect_host() now requires nvidia-smi -L to list a GPU before setting has_physical_nvidia. Routes AMD ROCm hosts with a broken leftover nvidia-smi to the ROCm path instead of misclassifying them as NVIDIA.
  - install_llama_prebuilt.py: scan upstream assets for any rocm-<version> prebuilt instead of hard-coding rocm-7.2, so ROCm 6.x / 7.0 / 7.1 / 7.3+ users pick up a matching upstream prebuilt when one exists.
  - install_llama_prebuilt.py: validate_server() adds --n-gpu-layers 1 for linux-rocm and windows-hip hosts, so new HIP prebuilts are preflighted on the GPU path instead of passing validation on CPU only.
  - install_llama_prebuilt.py: restore the published windows-cpu fallback for AMD Windows hosts without a HIP prebuilt so hash-approved bundles are still preferred over the raw upstream CPU asset.
  - install_python_stack.py: drop the /opt/rocm / hipcc gate in _ensure_rocm_torch() and rely on _has_rocm_gpu(). Runtime-only ROCm installs (package-managed minimal installs, Radeon software) that ship amd-smi / rocminfo without hipcc can now repair a CPU-only venv via "unsloth studio update". Adds an explicit IS_WINDOWS / IS_MACOS guard.
  - studio/backend/utils/hardware/amd.py: honour HIP_VISIBLE_DEVICES / ROCR_VISIBLE_DEVICES / CUDA_VISIBLE_DEVICES in get_primary_gpu_utilization(). A process restricted to GPU 2 now reports metrics for GPU 2 instead of physical GPU 0. Tighten the plain bytes unit detection to an explicit allowlist.
  - studio/backend/utils/hardware/hardware.py: route get_backend_visible_gpu_info()'s backend_cuda_visible_devices field through a helper that reads HIP_VISIBLE_DEVICES on ROCm. Drop the unconditional "(rocm=False)" suffix in apply_gpu_ids() logs.

* Fix round 2 regressions: ROCm validate_server and Windows HIP routing
  Follow-up to
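The hardened GPU probes described in this PR ("GPU: <digit>" / "GPU[<digit>" data rows, gfx[1-9] to reject the gfx000 CPU agent) can be sketched as two small predicates. This is an illustrative sketch; the real checks live in the shell and Python installers, and the function names here are inventions:

```python
import re

# amd-smi data rows look like "GPU: 0" (newer versions) or
# "GPU[0] : Card series: ..." (older versions). A header line that merely
# contains the word "gpu" must not count as a device.
_GPU_ROW = re.compile(r"^GPU\s*[:\[]\s*\d", re.MULTILINE)

def has_amd_gpu_rows(amd_smi_list_output: str) -> bool:
    return bool(_GPU_ROW.search(amd_smi_list_output))

def rocminfo_lists_gpu(rocminfo_output: str) -> bool:
    # gfx[1-9] rejects gfx000 (the CPU HSA agent) while still matching real
    # GPU targets such as gfx90a, gfx942, or gfx1100.
    return re.search(r"gfx[1-9]", rocminfo_output) is not None
```

Requiring an actual data row or a non-zero gfx target is what prevents the false positives on hosts that have ROCm tools installed but no AMD GPU.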

commit 1d8160376e
Bump minimum unsloth version to 2026.4.4 in install scripts (#4876)

commit 6100867447
Bump minimum unsloth version to 2026.4.2 in install scripts (#4842)
|
|
d22b2a18f9
|
fix: add tokenizers to no-torch deps and TORCH_CONSTRAINT for arm64 macOS py313+ (#4748)
* fix: add tokenizers to no-torch runtime deps and add TORCH_CONSTRAINT for arm64 macOS py313+ Two installer fixes: 1. Add `tokenizers` to `no-torch-runtime.txt` before `transformers`. Without it, `from transformers import AutoConfig` crashes on startup because `--no-deps` skips transitive dependencies. 2. Add `TORCH_CONSTRAINT` variable to `install.sh`. On arm64 macOS with Python 3.13+, tighten the torch requirement to `>=2.6` since torch <2.6 has no cp313 arm64 wheels. The variable replaces the previously hard-coded constraint in the uv pip install line. Includes 66 tests (42 pytest + 24 bash) covering: - Structural checks on install.sh, install.ps1, no-torch-runtime.txt - Shell snippet tests with mocked python for 13 platform/version combos - Mock uv integration verifying correct constraint string - E2E venv tests on Python 3.12 and 3.13 confirming AutoConfig works - Negative control proving AutoConfig fails without tokenizers - Full no-torch sandbox regression guards (safetensors, huggingface_hub) * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix incomplete no-torch manifest and align E2E tests with real --no-deps path - Add missing transitive deps to no-torch-runtime.txt that are required under --no-deps: regex, typing_extensions, filelock, httpx, httpcore, certifi, idna, anyio, sniffio, h11. Without these, `from transformers import AutoConfig` still fails after install.sh --no-torch. - Change all E2E tests to use --no-deps (matching what install.sh does) instead of normal dep resolution. Previous tests passed even with an incomplete manifest because uv backfilled transitive deps. - Rewrite negative control to derive from the real no-torch-runtime.txt with tokenizers stripped, proving the specific fix matters. - Replace GNU-only sed -i with heredoc in shell test for macOS compat. - Remove unused os/sys imports from Python test file. - Quote SKIP_TORCH and mock uv paths in bash -c strings. 
* Assert install succeeds before checking import results in E2E tests Address review feedback: test_torch_not_importable and test_tokenizers_directly_importable in Group 3 now assert that uv pip install returns 0 before checking import behavior. This prevents false positives when the install itself fails silently. * Assert install succeeds in negative control and tighten error check - Add missing install-success assertion in test_negative_control_no_tokenizers to prevent false positives from network/install failures. - Tighten error message check to look for "tokenizers" in stderr or ModuleNotFoundError, rather than the generic "No module" substring which could match unrelated import failures. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Co-authored-by: Daniel Han <danielhanchen@users.noreply.github.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> |
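The arm64 macOS constraint tightening described above can be sketched as a small shell helper. This is an illustrative sketch, not the actual install.sh code: the function name `pick_torch_constraint` and its argument convention are assumptions; only the rule itself (torch `>=2.6` on arm64 macOS with Python 3.13+, because torch <2.6 has no cp313 arm64 wheels) comes from the commit message.

```shell
#!/bin/sh
# Hedged sketch of the TORCH_CONSTRAINT selection logic.
# pick_torch_constraint OS ARCH PYVER, e.g.: pick_torch_constraint Darwin arm64 3.13
pick_torch_constraint() {
    os="$1"; arch="$2"; pyver="$3"
    constraint="torch"
    if [ "$os" = "Darwin" ] && [ "$arch" = "arm64" ]; then
        major="${pyver%%.*}"        # "3.13" -> "3"
        minor="${pyver#*.}"         # "3.13" -> "13"
        if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 13 ]; }; then
            constraint="torch>=2.6" # torch <2.6 has no cp313 arm64 wheels
        fi
    fi
    printf '%s\n' "$constraint"
}

pick_torch_constraint Darwin arm64 3.13   # -> torch>=2.6
pick_torch_constraint Darwin arm64 3.12   # -> torch
pick_torch_constraint Linux x86_64 3.13   # -> torch
```

The variable then replaces the previously hard-coded constraint string in the `uv pip install` line, so only one place needs to change when wheel availability shifts again.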
||
|
|
6984e118eb
|
Bump installer minimum version pin to 2026.3.18 (#4729)
Matches the latest PyPI release. |
||
|
|
5557e1fd27
|
studio: unify Windows installer/setup logging style, verbosity controls, and startup messaging (#4651)
* refactor(studio): unify setup terminal output style and add verbose setup mode * studio(windows): align setup.ps1 banner/steps with setup.sh (ANSI, verbose) * studio(setup): revert nvcc path reordering to match main * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * studio(setup): restore fail-fast llama.cpp setup flow * studio(banner): use IPv6 loopback URL when binding :: or ::1 * Fix IPv6 URL bracketing, try_quiet stderr, _step label clamp - Bracket IPv6 display_host in external_url to produce clickable URLs - Redirect try_quiet failure log to stderr instead of stdout - Clamp _step label to column width to prevent negative padding * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Add sandbox integration tests for PR #4494 UX fixes Simulation harness (tests/simulate_pr4494.py) creates an isolated uv venv, copies the real source files into it, and runs subprocess tests for all three fixes with visual before/after demos and edge cases. Standalone bash test (tests/test_try_quiet.sh) validates try_quiet stderr redirect across 8 scenarios including broken-version contrast. 39 integration tests total (14 IPv6 + 15 try_quiet + 10 _step), all existing 75 unit tests still pass. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Truncate step() labels in setup.sh to match PS1 and Python The %-15s printf format pads short labels but does not truncate long ones. Change to %-15.15s so labels wider than 15 chars are clipped, matching the PowerShell .Substring(0,15) and Python label[:15] logic. * Remove sandbox integration tests from PR These test files are not part of the styling fix and should not ship with this PR. 
* Show error output on failure instead of suppressing it - install_python_stack.py: restore _red for patch_package_file warnings (was downgraded to _dim) - setup.ps1: capture winget output and show on failure for CUDA, Node, Python, and OpenSSL installs (was piped to Out-Null) - setup.ps1: always show git pull failure warning, not just in verbose mode * Show winget error output for Git and CMake installs on failure Same capture-and-print-on-failure pattern already used for Node, Python, CUDA, and OpenSSL winget installs. * fix: preserve stderr for _run_quiet error messages in setup.sh The step() helper writes to stdout, but _run_quiet's error header was originally sent to stderr (>&2). Without the redirect, callers that separate stdout/stderr would miss the failure headline while still seeing the log body on stderr. Add >&2 to both step calls inside _run_quiet to match main's behavior. * feat: add --verbose flag to setup and update commands Wire UNSLOTH_VERBOSE=1 through _run_setup_script() so that 'unsloth studio update --verbose' (and the deprecated 'setup') passes the flag to setup.sh / setup.ps1 / install_python_stack.py. 
* fix(studio): honor verbose logging and keep llama.cpp failures non-blocking * fix(studio): switch installer to 'studio update' and normalize Windows setup logs * chore(studio): refine localhost tip and remove skip-base setup noise * fix(studio): align Windows setup logs with Linux style and improve startup tips * fix(studio): align Windows setup logs with Linux style * refactor(windows-installer): align install/setup logs with Linux style and silence auto-launch output * refactor(windows): align installer/setup output with Linux style and reduce default verbosity * refactor(windows): match install.ps1 output style/colors to setup and quiet default logs * fix(studio-banner): update personal-computer localhost tip * fix(setup.sh): restore verbose llama.cpp build output while keeping default quiet mode * fix(install.sh): align installer logging with setup style and restore POSIX-safe color output * fix(install.sh): preserve installer reliability and launch visibility Export verbose mode for child setup processes, harden install command handling under set -e, and keep first-run studio launch non-silent so users can always see URL and port fallback output. * fix(windows installer): keep exit semantics and degraded status accurate Use quiet command redirection that preserves native exit codes, keep startup output visible on first launch, and report limited install status when llama.cpp is unavailable. * fix(setup.sh): improve log clarity and enforce GGUF degraded signaling Restore clean default setup output, add verbose-only diagnostics, fail fast on Colab dependency install errors, and return non-zero when GGUF prerequisites or llama.cpp artifacts are unavailable. * fix(installer): harden bash preflight and PowerShell GPU checks Fail fast when bash is unavailable before invoking setup.sh, and replace remaining nvidia-smi pipeline checks with stream redirection patterns that preserve reliable native exit-code handling. 
* fix(windows): keep verbose output visible while preserving exit codes Ensure PowerShell wrapper helpers in install/update stream native command output to host without returning it as function output, so npm logs no longer corrupt exit-code checks in verbose mode. * fix(windows): avoid sticky UNSLOTH_VERBOSE and gate studio update verbosity * Fix degraded llama.cpp exit code, PS verbose stderr, banner URLs, npm verbose - setup.sh: Do not exit non-zero when llama.cpp is unavailable; the footer already reports the limitation, and install.sh runs under set -e so a non-zero exit aborts the entire install including PATH/shortcuts/launch. - setup.ps1: Remove $? check in Invoke-SetupCommand verbose path; PS 5.1 sets $? = $false when native commands write to stderr even with exit 0. Merge stderr into stdout with 2>&1 and rely solely on $LASTEXITCODE. - startup_banner.py: Show the actual bound address when Studio is bound to a non-loopback interface instead of always showing 127.0.0.1/localhost. - setup.sh: Use run_quiet_no_exit instead of run_quiet_no_exit_always for npm install steps so --verbose correctly surfaces npm output. * Fix install.ps1 verbose stderr, propagate UNSLOTH_VERBOSE, fix git clone verbose - install.ps1: Apply same Invoke-InstallCommand fix as setup.ps1 -- merge stderr into stdout with 2>&1 and drop the $? check that misclassifies successful native commands on PS 5.1. - install.ps1 + setup.ps1: Export UNSLOTH_VERBOSE=1 to the process env when --verbose is passed so child processes like install_python_stack.py also run in verbose mode. - setup.sh: Use run_quiet_no_exit for git clone llama.cpp so --verbose correctly surfaces clone diagnostics during source-build fallback. * Surface prebuilt llama.cpp output in verbose mode, remove dead code, fix banner - setup.sh: Use tee in verbose mode for prebuilt llama.cpp installer so users can see download/validation progress while still capturing the log for structured error reporting on failure. 
- setup.ps1: Same fix for Windows -- use Tee-Object in verbose mode. - setup.sh: Remove run_quiet_no_exit_always() which has no remaining callers. - startup_banner.py: Avoid printing the same URL twice when Studio is bound to a specific non-loopback address that matches the display host. * Fix run_install_cmd exit code after failed if-statement The previous pattern 'if "$@"; then return 0; fi; _rc=$?' always captured $? = 0 because $? reflects the if-statement result, not the command's exit code. Switch to '"$@" && return 0; _rc=$?' which preserves the actual command exit code on failure. Applies to both verbose and quiet branches. * Fix _run_quiet exit code, double uv install, missing --local flag - setup.sh: Fix _run_quiet verbose path that always captured exit code 0 due to $? resetting after if-then-fi with no else. Switch to the same '"$@" && return 0; exit_code=$?' pattern used in install.sh. - setup.sh: Consolidate the two uv install branches (verbose + quiet) into a single attempt with conditional output. Previously, when verbose mode was on and the install failed, a second silent attempt was made. - install.ps1: Pass --local flag to 'unsloth studio update' when $StudioLocalInstall is true. Without this, studio.py's update() command overwrites STUDIO_LOCAL_INSTALL to "0", which could cause issues if setup.ps1 or install_python_stack.py later checks that variable. * Revert SKIP_STUDIO_BASE change for --no-torch, restore install banners - Revert SKIP_STUDIO_BASE from 0 to 1 for --no-torch. install.sh already installs unsloth+unsloth-zoo and no-torch-runtime.txt before calling setup.sh, so letting install_python_stack.py redo it was redundant and slowed down --no-torch installs for no benefit. - Restore the "Unsloth Studio installed!" success banner and "starting Unsloth Studio..." launch message so users get clear install completion feedback before the server starts. 
* Make llama.cpp build failure a hard error with proper cleanup - setup.sh: Restore exit 1 when _LLAMA_CPP_DEGRADED is true. GGUF inference requires a working llama.cpp build, so this should be a hard failure, not a silent degradation. - install.sh: Catch setup.sh's non-zero exit with '|| _SETUP_EXIT=$?' instead of letting set -e abort immediately. This ensures PATH setup, symlinks, and shortcuts still get created so the user can fix the build deps and retry with 'unsloth studio update'. After post-install steps, propagate the failure with a clear error message. * Revert install.ps1 to 'studio setup' to preserve SKIP_STUDIO_BASE 'studio update' pops SKIP_STUDIO_BASE from the environment, which defeats the fast-path version check added in PR #4667. When called from install.ps1 (which already installed packages), SKIP_STUDIO_BASE=1 must survive into setup.ps1 so it skips the redundant PyPI check and package reinstallation. 'studio setup' does not modify env vars. * Remove deprecation message from 'studio setup' command install.ps1 uses 'studio setup' (not 'studio update') to preserve SKIP_STUDIO_BASE. The deprecation message was confusing during first install since the user never typed the command. * Fix stale env vars, scope degraded exit, generic error message for PR #4651 - install.ps1: Always set STUDIO_LOCAL_INSTALL and clear STUDIO_LOCAL_REPO when not using --local, to prevent stale values from a previous --local run in the same PowerShell session. Fix log messages to say 'setup' not 'update' since we call 'studio setup'. - setup.sh: Only exit non-zero for degraded llama.cpp when called from the installer (SKIP_STUDIO_BASE=1). Direct 'unsloth studio update' keeps degraded installs successful since Studio is still usable for non-GGUF workflows and the footer already reports the limitation. - install.sh: Make the setup failure error message generic instead of GGUF-specific, so unrelated failures (npm, Python deps) do not show misleading cmake/git recovery advice. 
* Show captured output on failure in quiet mode for PR #4651 Both Invoke-InstallCommand (install.ps1) and Invoke-SetupCommand (setup.ps1) now capture command output in quiet mode and display it in red when the command fails. This matches the behavior of run_install_cmd in install.sh where failure output is surfaced even in quiet mode, making cross-platform error debugging consistent. * Match degraded llama.cpp exit on Windows, fix --local recovery hint for PR #4651 - setup.ps1: Exit non-zero for degraded llama.cpp when called from install.ps1 (SKIP_STUDIO_BASE=1), matching setup.sh behavior. Direct 'unsloth studio update' keeps degraded installs successful. - install.sh: Show 'unsloth studio update --local' in the recovery message when the install was run with --local, so users retry with the correct flag instead of losing local checkout context. --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Daniel Han <danielhanchen@gmail.com> |
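The exit-code bug fixed above is easy to reproduce in isolation. The sketch below contrasts the broken `if`-based pattern (where `$?` reflects the completed if-statement, which is 0 when no branch ran) with the fixed AND-list pattern that preserves the real failure code; the function names are illustrative, not the ones in install.sh/setup.sh.

```shell
#!/bin/sh
# Broken: after `fi` with no else taken, $? is the if-statement's status (0).
run_quiet_broken() {
    if "$@" >/dev/null 2>&1; then return 0; fi
    _rc=$?            # always 0 here -- the command's real status is lost
    return "$_rc"
}

# Fixed: the AND-list leaves the failed command's status in $?.
run_quiet_fixed() {
    "$@" >/dev/null 2>&1 && return 0
    _rc=$?            # preserves the actual failure code
    return "$_rc"
}

fail_with_3() { return 3; }

run_quiet_broken fail_with_3; echo "broken: $?"   # prints: broken: 0
run_quiet_fixed  fail_with_3; echo "fixed:  $?"   # prints: fixed:  3
```

This matters wherever the wrapper's return code drives retry logic or `set -e`: the broken form silently converts every failure into success.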
||
|
|
9477e7c43f
|
Bump minimum unsloth version to 2026.3.16 in install scripts (#4663)
Update install.sh and install.ps1 to require unsloth>=2026.3.16, matching the latest PyPI release. |
||
|
|
eacaf6827c
|
fix: no-torch install deps without pulling torch transitively (#4650)
Use --no-deps for ALL packages (unsloth, unsloth-zoo, and runtime deps) since the current PyPI metadata for unsloth still declares torch as a hard dependency. Runtime deps (typer, pydantic, safetensors, transformers, etc.) are installed from no-torch-runtime.txt with --no-deps to prevent transitive torch resolution from accelerate, peft, trl, and sentence-transformers. no-torch-runtime.txt now includes unsloth's own direct deps (typer, pydantic, pyyaml, nest-asyncio) since --no-deps skips those too. install.sh installs no-torch-runtime.txt directly (via helper function _find_no_torch_runtime). install.ps1 does the same via Find-NoTorchRuntimeFile. SKIP_STUDIO_BASE stays at 1 to avoid setup.sh fast-path issues. install_python_stack.py NO_TORCH branch does the same for unsloth studio update, using package_name instead of hardcoded "unsloth". |
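The no-torch install flow described above boils down to passing `--no-deps` on every `uv pip install` so stale PyPI metadata cannot pull torch transitively. The sketch below only composes the commands as strings for illustration; the requirements path shown is an assumption, and in the real scripts a helper (`_find_no_torch_runtime` / `Find-NoTorchRuntimeFile`) resolves it.

```shell
#!/bin/sh
# Illustrative sketch: emit the install commands for no-torch mode.
# Every package goes in with --no-deps so declared torch dependencies
# in published metadata are never resolved.
build_no_torch_install() {
    req_file="$1"
    printf 'uv pip install --no-deps unsloth unsloth-zoo\n'
    printf 'uv pip install --no-deps -r %s\n' "$req_file"
}

build_no_torch_install studio/backend/requirements/no-torch-runtime.txt
```

The flip side, as the commit notes, is that `--no-deps` also skips unsloth's own direct dependencies, which is why no-torch-runtime.txt has to list them explicitly.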
||
|
|
b1c3a1e857
|
fix: replace [huggingfacenotorch] with no-torch-runtime.txt requirements (#4649)
The [huggingfacenotorch] extras only exist in pyproject.toml but are NOT published on PyPI, so uv pip install "unsloth[huggingfacenotorch]" fails on fresh installs from the registry. Fix: add studio/backend/requirements/no-torch-runtime.txt with the runtime deps (safetensors, transformers, datasets, accelerate, etc.) that mirror [huggingfacenotorch] from pyproject.toml. In no-torch mode: 1. install.sh/ps1 install unsloth + unsloth-zoo with --no-deps 2. SKIP_STUDIO_BASE=0 so install_python_stack.py's NO_TORCH branch runs 3. install_python_stack.py installs no-torch-runtime.txt |
||
|
|
5c7c3883cb
|
feat: update app icons to rounded logo (#4640)
Replace favicon.png, unsloth-gem.png, and unsloth.ico with rounded.png. Update install.sh to source rounded.png for Linux/macOS shortcuts. |
||
|
|
3a5e3bbd6d
|
Make Studio shortcuts launch in a visible terminal (#4638)
* Make Studio shortcuts launch in a visible terminal Studio shortcuts (Desktop/Start Menu) previously launched the server as a hidden background process. Closing the browser tab did not stop the server, leaving users with no obvious way to shut it down. This change makes shortcuts open a visible terminal window so users can see server output and close the terminal to stop Studio. Launcher changes (install.sh): - Add TTY detection in the launcher's main section. When a TTY is present (foreground mode), the launcher spawns a background browser-opener and then exec's the studio process directly. This means closing the terminal sends SIGHUP to studio, stopping it cleanly. When no TTY is present (background mode, e.g. macOS .app or headless), the existing _spawn_terminal behavior is preserved. - Add _open_browser_when_ready helper that polls health on the specific launch port and opens the browser once ready. - Add WSL fallback in _open_browser: uses powershell.exe Start-Process or cmd.exe /c start instead of unreliable xdg-open under WSL. Linux .desktop shortcut: - Change Terminal=false to Terminal=true so the desktop environment opens the user's default terminal emulator for the launcher. WSL support: - Remove the early-return that skipped WSL entirely. WSL now gets the launcher script and studio.conf written. - Add WSL shortcut creation: generates Windows Desktop and Start Menu .lnk files via a temp PowerShell script. Targets wt.exe (Windows Terminal) with automatic fallback to wsl.exe. Uses WSL_DISTRO_NAME for multi-distro setups. Windows launcher (install.ps1): - Add Find-FreeLaunchPort function that mirrors the Unix _find_launch_port logic, scanning Get-NetTCPConnection for busy ports and returning the first free port in the configured range. - Replace the hardcoded $basePort with the dynamic port result, with a MessageBox error dialog if no free port is found. 
* Fix review findings: lock race, WSL quoting, Windows port fallback Foreground lock race (10/10 reviewers): The foreground mode released the single-instance lock before exec, allowing a second launcher to acquire the lock and race for the same port during startup. Move lock release into the background subshell so it only happens after the health check passes. WSL shortcut quoting (10/10 reviewers): WSL_DISTRO_NAME values with spaces (e.g. "Ubuntu Preview", "Fedora Remix for WSL") were not quoted, causing the distro name to be split across multiple arguments. Add double-quoting around the distro name and launcher path in the generated shortcut arguments. Windows port fallback (3/10 reviewers): Find-FreeLaunchPort silently assumed no ports were listening when Get-NetTCPConnection was unavailable, which could return 8888 even when busy. Add a Test-PortBusy fallback that probes ports with TcpListener when Get-NetTCPConnection fails. Also scope the Get-NetTCPConnection query to only the port range we care about. * Skip powershell.exe shortcut creation if wslpath fails If wslpath -w fails (returns empty), do not attempt to pass a Linux-style path to powershell.exe -- it would always fail. Only run powershell.exe when we have a valid Windows path for the temp PS1 script. 
* Remove dead code and fix background health poll target - Remove unused _open_browser_when_ready function - Background mode now polls only the specific _launch_port instead of scanning all ports via _find_healthy_port, matching foreground behavior - Add launcher test harness (22 unit + 19 integration tests) * Fix port probe scope, lock ownership, and T4 test coverage - Test-PortBusy: bind on Any instead of Loopback to match Studio's 0.0.0.0 bind scope (prevents false-free in fallback path) - _release_lock: verify PID ownership before removing lock dir (prevents a timed-out subshell from deleting another launcher's lock) - T4 test: fail first curl call so the test actually exercises the lock-contention wait path instead of short-circuiting via fast path * Temporarily remove launcher test scripts Tests will be re-added in a follow-up PR to keep this diff focused on the launcher changes. |
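The free-port scan that both `_find_launch_port` and `Find-FreeLaunchPort` implement can be sketched as below. The port range (8888-8908) comes from the commit messages; the busy check is stubbed via a `BUSY_PORTS` variable so the scan logic is visible without touching real sockets (the real launcher uses ss/lsof on Unix and Get-NetTCPConnection with a TcpListener fallback on Windows).

```shell
#!/bin/sh
# Stubbed busy check: treat $1 as busy if it appears in $BUSY_PORTS.
port_busy() {
    case " $BUSY_PORTS " in *" $1 "*) return 0 ;; *) return 1 ;; esac
}

# Scan the configured range and print the first free port.
find_launch_port() {
    p=8888
    while [ "$p" -le 8908 ]; do
        port_busy "$p" || { echo "$p"; return 0; }
        p=$((p + 1))
    done
    return 1   # no free port in range
}

BUSY_PORTS="8888 8889"
find_launch_port   # -> 8890
```

Note the fix called out in the review round: a busy-check that silently reports "free" when its probe mechanism is unavailable defeats the whole scan, which is why the Windows side grew a TcpListener fallback.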
||
|
|
3c9f0ed149
|
fix: use unsloth[huggingfacenotorch] instead of --no-deps in no-torch mode (#4647)
The previous --no-deps approach skipped ALL dependencies, not just torch. This left safetensors, transformers, datasets, accelerate, etc. missing, causing PackageNotFoundError at runtime. Fix: in no-torch mode, install unsloth[huggingfacenotorch] (which pulls all runtime deps except torch), then install unsloth-zoo with --no-deps (since zoo's published metadata still declares torch as a hard dep). This gives a working no-torch environment with all non-torch packages. Applied to all three installer files: install.sh, install.ps1, and studio/install_python_stack.py. |
||
|
|
e9ac785346
|
fix: install.sh Mac Intel compatibility + Studio no-torch support (#4624)
* fix: install.sh Mac Intel compatibility + Studio no-torch support (#4621) On Intel Macs (x86_64), PyTorch has no wheels for torch >= 2.3, so the installer crashes. Even when torch is absent, Studio crashes on startup because two files have bare top-level torch imports. Studio's GGUF inference (llama.cpp) does not need PyTorch. Training and HF-inference already isolate torch to subprocesses. Only 2 files in the server startup chain had top-level torch imports preventing startup. Changes: - install.sh: detect architecture, default to Python 3.12 on Intel Mac, skip torch install, add Python 3.13.8 guard for arm64, pass UNSLOTH_NO_TORCH env var to setup.sh - data_collators.py: remove unused `import torch` (no torch.* refs) - chat_templates.py: lazy-import IterableDataset into function bodies - install_python_stack.py: add IS_MACOS/NO_TORCH constants, skip torch-dependent packages, skip overrides.txt, skip triton on macOS No existing working flow changes. Linux/WSL and macOS arm64 behavior is identical. 
* tests: add test suite for Mac Intel compat + no-torch mode Shell tests (test_mac_intel_compat.sh): - version_ge edge cases (9 tests) - Architecture detection for Darwin x86_64/arm64, Linux x86_64/aarch64 - get_torch_index_url returns cpu on simulated Darwin - UNSLOTH_NO_TORCH propagation to both setup.sh branches Python unit tests (test_no_torch_filtering.py): - _filter_requirements with NO_TORCH_SKIP_PACKAGES - NO_TORCH env var parsing (true/1/TRUE/false/0/unset) - IS_MACOS constant check - Overrides skip and triton macOS skip guards Python import tests (test_studio_import_no_torch.py): - data_collators.py loads in isolated no-torch venv - chat_templates.py has no top-level torch imports - Negative control confirms import torch fails without torch * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * tests: add E2E sandbox tests for Mac Intel no-torch mode Replace static/synthetic test stubs with real sandbox tests: - Shell: E2E uv venv creation at Python 3.12, mock uv shim to verify torch install is skipped when MAC_INTEL=true, dynamic env propagation test for UNSLOTH_NO_TORCH in both local and non-local install paths - Python filtering: test real extras.txt and extras-no-deps.txt with NO_TORCH_SKIP_PACKAGES, subprocess mock of install_python_stack() for 5 platform configs (NO_TORCH+macOS, Windows+NO_TORCH, normal Linux, Windows-only, macOS-only), VCS URL and env marker edge cases - Python imports: parametrized Python 3.12+3.13 venv fixture, dataclass instantiation for all 3 collator classes, chat_templates.py exec with stubs, negative controls proving import torch and torchao install fail in no-torch venvs 91 total tests, all passing. 
* [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix: address reviewer findings for Intel Mac no-torch mode P1 fixes: - Auto-infer NO_TORCH in install_python_stack.py via platform.machine() so `unsloth studio update` preserves GGUF-only mode without needing the UNSLOTH_NO_TORCH env var (6/10 reviewers) - Add openai-whisper and transformers-cfg to NO_TORCH_SKIP_PACKAGES since both have unconditional torch dependencies (4/10 reviewers) - Skip unsloth-zoo on Intel Mac --local installs (depends on torch) in both migrated and fresh install paths (1/10) - Recreate stale 3.13 venvs as 3.12 on Intel Mac re-runs (1/10) - Detect Apple Silicon under Rosetta via sysctl hw.optional.arm64 and warn user to use native arm64 terminal (1/10) P2 fixes: - Wire new test files into tests/run_all.sh (4/10 reviewers) - Add update-path tests (skip_base=False) for Intel Mac - Add _infer_no_torch tests for platform auto-detection P3 fixes: - Fix macOS progress bar total (triton step skipped but was counted) - Fix temp file leak when Windows + NO_TORCH filters stack All tests pass: 30 shell, 66 Python (96 total). * feat: add --python override flag to install.sh Lets users force a specific Python version, e.g. ./install.sh --python 3.12. Addresses M2 Mac users whose systems resolve to a problematic 3.13.x patch. When --python is set, the Intel Mac stale-venv guard and 3.13.8 auto-downgrade are skipped so the user's choice is respected. 
* tests: add comprehensive E2E sandbox tests for no-torch mode Add test_e2e_no_torch_sandbox.py with 7 test groups (43 tests total) covering the full no-torch import chain, edge cases, and install logic: - Group 1: BEFORE vs AFTER import chain comparison (proves the bug existed and the fix works by synthetically prepending top-level torch imports) - Group 2: Dataclass instantiation without torch - Group 3: Edge cases with broken/fake torch modules on sys.path - Group 4: Hardware detection fallback to CPU without torch - Group 5: install.sh flag parsing, version resolution, arch detection - Group 6: install_python_stack.py NO_TORCH filtering - Group 7: Live server startup without torch (marked @server, skipped when studio venv is unavailable) All 43 tests pass on both Python 3.12 and 3.13 isolated venvs. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat: add --no-torch flag to install.sh/ps1, fix lazy import bug in dataset formatting - Fix chat_templates.py: narrow torch IterableDataset import into inner try/except ImportError so dataset.map() works without torch installed - Fix format_conversion.py: same lazy import fix for convert_chatml_to_alpaca and convert_alpaca_to_chatml - Add --no-torch flag to install.sh with unified SKIP_TORCH variable (driven by --no-torch flag OR MAC_INTEL auto-detection) - Add --no-torch flag to install.ps1 with $SkipTorch variable - Print CPU hint when no GPU detected and --no-torch not set - Replace MAC_INTEL guards with SKIP_TORCH in torch install sections - Update shell tests (40 pass) and Python tests (90 pass) * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix: address reviewer findings for --no-torch installer paths - Fix migrated-env branch in install.sh and install.ps1: check SKIP_TORCH first, then branch on STUDIO_LOCAL_INSTALL. 
Previously SKIP_TORCH+non-local fell into else and installed unsloth-zoo (which depends on torch), defeating --no-torch mode. - Fix $env:UNSLOTH_NO_TORCH leak in install.ps1: always set to "true" or "false" instead of only setting on the true branch. Prevents stale no-torch state from leaking across runs in the same PS session. - Fix install_python_stack.py update path: add NO_TORCH guard around base.txt install so unsloth studio update does not reinstall unsloth-zoo (which depends on torch) in no-torch mode. * fix: install unsloth + unsloth-zoo with --no-deps in no-torch mode Instead of skipping unsloth-zoo entirely (which breaks unsloth's dependency on it), install both packages with --no-deps so they are present but torch is not pulled in transitively. Applied consistently across all no-torch paths: migrated-env, fresh-local, fresh-non-local in install.sh, install.ps1, and install_python_stack.py. * chore: temporarily remove test files (will be added in a follow-up) * refactor: deduplicate SKIP_TORCH conditional branches in installers Collapse if/else blocks that differ only by --no-deps into a single branch with a conditional flag variable. Applied to migrated-env and fresh-local paths in install.sh, install.ps1, and install_python_stack.py. * fix: apply --no-deps to fresh non-local --no-torch install path The non-local else branch was missing $_no_deps_arg/$noDepsArg, so uv pip install unsloth would resolve torch from PyPI metadata (the published unsloth package still declares torch as a hard dep). Now --no-deps is applied consistently to all SKIP_TORCH code paths. --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> |
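The requirements filtering that `_filter_requirements` performs in install_python_stack.py (Python, with `NO_TORCH_SKIP_PACKAGES`) can be illustrated with a shell rendition of the same idea: drop any manifest line whose package name is on the torch-dependent skip list, including lines with version specifiers or environment markers. The skip list below is abbreviated and the function name is made up for this sketch.

```shell
#!/bin/sh
# Shell rendition (illustrative only) of dropping torch-dependent packages
# from a requirements manifest in no-torch mode. The regex matches the
# package name at line start, followed by end-of-line or any non-name
# character (so "torch>=2.4" and 'triton; sys_platform == "linux"' both match,
# while "transformers==4.44" does not).
filter_no_torch() {
    grep -v -E \
      '^(torch|torchvision|torchaudio|triton|xformers|openai-whisper|transformers-cfg)([^A-Za-z0-9_.-].*)?$'
}

printf 'transformers==4.44\ntorch>=2.4\nsafetensors\ntriton; sys_platform == "linux"\n' \
    | filter_no_torch
# -> transformers==4.44
# -> safetensors
```

The commit history shows why the list needs care: openai-whisper and transformers-cfg were added later because both declare unconditional torch dependencies.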
||
|
|
baabfa0a6e
|
Fix Colab huggingface-hub conflict, ensurepip fallback, bump to 2026.3.14 (#4603)
* Fix Colab huggingface-hub conflict, ensurepip fallback, bump to 2026.3.14 - colab.py / setup.sh: relax == pins to >= when installing studio.txt on Colab so huggingface-hub does not clobber Colab's bundled version (breaks transformers is_offline_mode import) - install_python_stack.py: when uv is unavailable and pip is missing (uv-created venvs), bootstrap via ensurepip before attempting upgrade - Bump version to 2026.3.14 - Bump installer min version pins to 2026.3.14 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> |
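The ensurepip fallback above exists because uv-created venvs ship without pip. A minimal sketch of that bootstrap order follows; the function name and the final uv fallback are assumptions about the surrounding script, but the ordering (check pip, then ensurepip since it works offline, then uv as a network-dependent last resort) mirrors the commits.

```shell
#!/bin/sh
# Hedged sketch: make pip available in a venv that may lack it.
# Usage: bootstrap_pip /path/to/python
bootstrap_pip() {
    PYTHON="${1:-python3}"
    if "$PYTHON" -m pip --version >/dev/null 2>&1; then
        return 0                                      # pip already present
    fi
    # ensurepip needs no network or index, so try it first
    "$PYTHON" -m ensurepip --upgrade >/dev/null 2>&1 && return 0
    # last resort: requires uv on PATH and a reachable index
    uv pip install pip >/dev/null 2>&1 && return 0
    echo "warning: could not bootstrap pip" >&2
    return 1
}

bootstrap_pip python3 && echo "pip available" || echo "pip bootstrap failed"
```

Warning instead of silent failure matters here: the later commits note that swallowing this error surfaced downstream as a cryptic "No module named pip".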
||
|
|
b713a5085a
|
Bump installer min version to 2026.3.12 (#4600) | ||
|
|
bc9cf31478
|
Pin torch>=2.4,<2.11.0 in Studio installers (#4595)
torch 2.11.0 has a torch.compile/dynamo bug that causes a StopIteration crash in dict_keys_getitem when compiling MoE router functions (e.g. GptOssTopKRouter_forward). Pin to <2.11.0 until the upstream fix lands. Applies to both install.sh (Linux/macOS) and install.ps1 (Windows) fresh install paths. |
||
|
|
19e9c60a8e
|
Consolidate dual venvs and separate install from update (#4530)
* refactor: consolidate dual venvs into single ~/.unsloth/studio/unsloth_studio
* refactor: separate install.sh (first-time) from setup.sh (smart update with PyPI version check)
* fix: install.sh calls setup.sh directly, keep both setup and update CLI commands
* fix: use importlib.resources.files() directly without _path attribute
* fix: bootstrap uv before pip upgrade to handle uv venvs without pip
* fix: frontend 404 when launched via CLI, add global symlink to ~/.local/bin
* feat: add --local flag to install.sh and unsloth studio update for branch testing
* fix: resolve repo root from script location for --local installs
* feat: add --package flag to install.sh for testing with custom package names
* feat: add --package flag to unsloth studio update
* fix: always nuke venv in install.sh for clean installs
* revert: remove Windows changes, will handle in separate PR
* fix: error when --package is passed without an argument
* revert: restore Windows scripts to current main
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: always explicitly set STUDIO_LOCAL_INSTALL and STUDIO_PACKAGE_NAME env vars
* fix: pass explicit STUDIO_LOCAL_REPO env var for --local installs
* fix: align banner box for Setup vs Update labels
* deprecate: hide 'unsloth studio setup' command, point users to update/install.sh
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: check stdout not stdin for auto-launch detection (curl pipe fix)
* fix: update install URL to unsloth.ai/install.sh
* fix: update install.sh usage comments to unsloth.ai/install.sh
* fix: use --upgrade-package for base deps to preserve existing torch/CUDA installs
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: --local install now also installs unsloth-zoo via base.txt before editable overlay
* fix: don't skip base packages for --local installs (editable needs unsloth-zoo)
* refactor: move --local full dep install to install.sh, keep SKIP_STUDIO_BASE for all paths
* feat: add migration support for old .venv and CWD-based installs in setup.sh
* Revert "feat: add migration support for old .venv and CWD-based installs in setup.sh"
This reverts commit
|
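The "check stdout not stdin" bullet above addresses the `curl ... | sh` case: stdin is the pipe, so a `-t 0` test wrongly reports non-interactive even though fd 1 still points at the user's terminal. The sketch below parametrizes the TTY result so both branches are visible without a real terminal; in the real script the caller would compute it with something like `[ -t 1 ]`.

```shell
#!/bin/sh
# Illustrative sketch of the auto-launch decision.
# $1 is 1 when fd 1 is a TTY (caller: [ -t 1 ] && tty=1 || tty=0).
autolaunch_decision() {
    if [ "$1" = "1" ]; then
        echo "interactive: auto-launch Studio"
    else
        echo "piped/headless: skip auto-launch"
    fi
}

autolaunch_decision 1
autolaunch_decision 0
```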
||
|
|
be2cd7087a
|
Add macOS and Linux desktop shortcuts to install.sh (#4568)
* Add macOS and Linux desktop shortcuts to install.sh

  Adds a create_studio_shortcuts() function that creates platform-native shortcuts after `unsloth studio setup` completes, mirroring the Windows shortcut behavior from PR #4558.

  Linux: .desktop file in ~/.local/share/applications/ and ~/Desktop/
  macOS: .app bundle in ~/Applications/ with Info.plist, exec stub, and optional .icns icon built from unsloth-gem.png via sips+iconutil

  Both platforms share a Bash launcher script at ~/.local/share/unsloth/launch-studio.sh that provides:
  - Health check with service fingerprint verification
  - Port scanning (8888-8908) via ss/lsof
  - PID-file single-instance guard (no flock dependency)
  - Terminal spawning (macOS: Terminal.app; Linux: gnome-terminal etc.)
  - Browser open after health poll with 60s timeout

  WSL is skipped (no native desktop environment).

* Fix 6 issues found by 10 parallel reviewers

  1. [10/10] The health check now supports wget as a fallback to curl via an _http_get() helper, matching the installer's own download() pattern. Previously, wget-only systems would time out on every launch.
  2. [9/10] Exe path substitution now escapes sed metacharacters (&, \, |) and shell single-quotes before injection, preventing launcher corruption for paths like /opt/R&D/bin/unsloth.
  3. [4/10] The Linux .desktop Exec= field now quotes the launcher path, fixing launches from home directories containing spaces.
  4. [3/10] The macOS AppleScript command now escapes backslashes and double-quotes before interpolation into do script "...", fixing Terminal.app launch failures.
  5. [3/10] The single-instance guard now uses atomic mkdir instead of a racy check-then-write PID file, preventing duplicate concurrent launches on rapid double-click.
  6. [1/10] The launcher now scans for a free port via _find_launch_port() instead of always hardcoding -p 8888, so Studio starts correctly when another service already occupies port 8888.

  Also fixed: the `open` command on Linux (openvt) no longer incorrectly triggers the macOS browser-open path; it is now gated on uname=Darwin.

* Fix mktemp guard and exe path escaping from PR review comments

  Two real issues identified from automated review comments:
  1. Guard mktemp -d failure in macOS icns generation. If mktemp -d returned empty, dirname would resolve to / and rm -rf would attempt to delete the root directory. Now checks that the temp dir was actually created before proceeding.
  2. Replace the sed-based exe path substitution with a conf-file approach. The previous sed escaping broke paths containing apostrophes (e.g. /home/O'Connor/) because the '\'' escape introduced backslashes that were then double-escaped by the metacharacter pass. Now writes UNSLOTH_EXE to a separate studio.conf file that the launcher sources at runtime, eliminating all sed metacharacter and shell-quoting interaction issues. This also addresses the sed -i.bak portability concern (now moot since sed is no longer used on the launcher file).

* Fix unbound variable crash and per-user lock in launcher

  - Use ${UNSLOTH_EXE:-} so set -u does not crash before the friendly error message when studio.conf is missing or empty.
  - Append $(id -u) to the fallback lock path so each user gets their own lock directory when XDG_RUNTIME_DIR is unset.

* Mark desktop shortcut as trusted for GNOME/Nautilus

  On modern GNOME desktops, chmod +x alone is not sufficient to make a .desktop file launchable by double-click on ~/Desktop. Nautilus requires the metadata::trusted attribute to be set via gio; otherwise it shows a warning dialog instead of launching the application.
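The atomic-mkdir single-instance guard mentioned in the fix list can be sketched as follows. This is a minimal illustration of the technique, not the launcher's actual code; the lock path and messages are hypothetical. The key point is that `mkdir` either creates the directory or fails atomically, so two concurrent launches cannot both win, unlike a check-then-write PID file.

```shell
#!/bin/sh
# Sketch: atomic single-instance guard via mkdir (illustrative paths).
# mkdir succeeds for exactly one concurrent caller; everyone else fails.
LOCK_DIR="${XDG_RUNTIME_DIR:-/tmp}/unsloth-studio-lock-$(id -u)"

if mkdir "$LOCK_DIR" 2>/dev/null; then
    # We own the lock: record our PID and release the lock on exit.
    echo "$$" > "$LOCK_DIR/pid"
    trap 'rm -rf "$LOCK_DIR"' EXIT INT TERM
    echo "lock acquired: $LOCK_DIR"
else
    echo "Studio launcher already running (lock: $LOCK_DIR)" >&2
    exit 1
fi
```

Appending `$(id -u)` to the fallback path matches the per-user lock fix described above: when XDG_RUNTIME_DIR is unset, each user still gets a distinct directory under /tmp.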

commit acc881452f
fix: pin unsloth>=2026.3.11 in install.sh and install.ps1 (#4556)
Ensures both install scripts always pull a version that has the litellm removal fix. Without the pin, stale uv/pip caches could resolve the older 2026.3.10, which still had litellm in data-designer-deps.txt, causing setup to fail at step 8/11 while PyPI has litellm quarantined.

commit 4c1a6cb962
gate on min uv version and shortcut python candidate search if known (#4489)
* gate on min uv version and shortcut python candidate search if known
* fix sort -V cross-compat issue, run_quiet early exit on llamacpp, autolaunch
* update launch message
* Fix PR comments
* auto launch and find open port
* remove dev install
* Fix review findings: major-version guard, non-fatal port fallback, tty comment, restore local
* Remove autolaunch, clean up dead state and debug noise
  - Remove find_open_port, TTY-gated autolaunch, and </dev/tty redirection from install.sh; just print launch instructions
  - Remove unused BEST_MAJOR variable from studio/setup.sh
  - Remove stray "finished finding best python" debug echo
  - Fix stale comment "below 3.12" to "below 3.11"
* Reject prerelease uv at exact minimum version boundary
* Remove 2>/dev/null from version_ge numeric comparisons
  Let non-numeric version parts surface errors on stderr instead of being silently swallowed.

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>
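A minimum-version gate like the one this commit describes is commonly built on `sort -V`. The helper below is a hypothetical sketch in that spirit, not the repo's actual version_ge function. Note the cross-compat caveat the commit alludes to: `-V` is a GNU coreutils extension, and older BSD/macOS `sort` builds may not support it.

```shell
#!/bin/sh
# Hypothetical version_ge helper (illustrative, not the repo's code).
# Succeeds when $1 >= $2 in version-sort order: if the required
# version sorts first (or equal), the candidate is new enough.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "0.5.10" "0.5.2" && echo "0.5.10 satisfies minimum 0.5.2"
version_ge "0.4.9"  "0.5.2" || echo "0.4.9 is below minimum 0.5.2"
```

Plain lexicographic comparison would get "0.5.10" vs "0.5.2" wrong, which is why version sort (or an explicit numeric field comparison) is needed for this kind of gate.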

commit be901ecdea
Adding launch command to install scripts (#4477)
* Adding launch command to install scripts
* Making launch only for interactive env
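Gating a launch on an interactive environment is typically done with the shell's `-t` test, which checks whether a file descriptor is attached to a terminal. The snippet below is an illustrative sketch of that pattern (the messages are hypothetical, not the installer's actual output); under `curl | sh`, stdin is a pipe, so fd 0 is not a TTY and the launch would be skipped.

```shell
#!/bin/sh
# Sketch: only offer to launch when both stdin and stdout are TTYs.
if [ -t 0 ] && [ -t 1 ]; then
    echo "interactive shell: offering to launch Studio"
else
    echo "non-interactive install: printing launch instructions only"
fi
```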

commit d0e5a1d61e
Fix macOS install.sh: stdin consumption and Python discovery (#4472)
* Fix macOS install.sh: stdin consumption and Python discovery

  Two issues when running `curl | sh` on macOS:
  1. Commands like `brew install` consume bytes from the piped stdin, causing the shell to lose its place in the script. The remaining source code gets printed as text instead of being executed, so users have to run the installer twice. Fixed by redirecting stdin from /dev/null for brew, apt-get, xcode-select, and the uv installer subprocess.
  2. setup.sh searches for Python 3.11-3.13 on the system PATH via `compgen -c`. On macOS systems that only have Python 3.9 and/or 3.14, this fails with "No Python version between 3.11 and 3.13 found" even though uv already installed Python 3.13 into the venv. Fixed by adding the venv's bin/ to PATH before invoking `unsloth studio setup`.

* Guard PATH export against empty VENV_ABS_BIN

  If cd into the venv bin/ fails, VENV_ABS_BIN would be empty and PATH would start with ":", causing the current directory to be searched for executables. Wrap the export in a non-empty check.
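The VENV_ABS_BIN guard can be sketched as below. The venv path is a hypothetical example, not the installer's actual layout; the point is that an empty value must never be prepended to PATH, because a leading ":" makes the shell search the current directory for executables.

```shell
#!/bin/sh
# Sketch: resolve the venv bin/ directory; empty string if cd fails
# (e.g. the directory does not exist). Path is illustrative.
VENV_ABS_BIN="$(cd "$HOME/unsloth-studio/.venv/bin" 2>/dev/null && pwd)"

# Only prepend when non-empty, so PATH never starts with ":".
if [ -n "$VENV_ABS_BIN" ]; then
    export PATH="$VENV_ABS_BIN:$PATH"
else
    echo "warning: venv bin/ not found; PATH left unchanged" >&2
fi
```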

commit 6f129a214b
Fix Install commands for Windows + 1 line installs (#4447)
* One-liner setup for unsloth studio

* Fix install scripts: system deps, activation bugs, curl/wget support

  - install.sh: detect platform (macOS/Linux/WSL) and check for missing system dependencies (cmake, git, build-essential, libcurl4-openssl-dev). Prompt the user once for permission to install all missing packages via brew (macOS) or sudo apt-get (Linux/WSL). Add a wget fallback via a download() helper since curl is not always present on minimal Linux installs. Fix nested curl|sh stdin stealing by downloading the uv installer to a tempfile first. Replace venv activation (a no-op in a pipe subshell) with an explicit --python flag for uv pip install and direct venv binary invocation. Add an idempotency guard for venv creation. Redirect stdin on unsloth studio setup to prevent pipe consumption. On macOS, check for Xcode Command Line Tools and trigger install if missing.
  - install.ps1: wrap the script body in an Install-UnslothStudio function so that errors use return instead of exit (exit kills the terminal when run via irm|iex). Remove the activate.ps1 invocation entirely; use an explicit --python path for uv pip install and & $UnslothExe for studio setup. This avoids both the child-scope activation bug (& vs dot-source) and the execution policy error on default Windows systems. Add a winget availability check with a clear error message. Fix PATH refresh to append registry paths instead of replacing the session PATH. Add a uv installer fallback via the astral.sh PowerShell script if winget install does not put uv on PATH. Broaden the Python version check to accept 3.11-3.13. Add an idempotency guard for venv creation.
  - README.md: add a wget one-liner alternative for systems without curl.

* Fix Tailwind CSS v4 .gitignore bug on Windows (#4444)

  - Add the .gitignore hiding workaround to setup.ps1 (matching the existing setup.sh logic) so venv .gitignore files containing "*" don't prevent Tailwind's oxide scanner from finding .tsx source files
  - Add CSS size validation to setup.sh, setup.ps1, and build.sh to catch truncated Tailwind builds early
  - Remove stray force-rebuild overrides that made the "skip build if current" cache check dead code in both setup scripts
  - Add rm -rf dist to build.sh to force clean rebuilds for wheel packaging

* Change default port 8000 to 8888, fix installer bugs, improve UX

  - Change the default Studio port from 8000 to 8888 across all entry points (run.py, studio.py, ui.py, colab.py, vite.config.ts, setup scripts)
  - Update the launch banner from "Launching with studio venv..." to "Launching Unsloth Studio... Please wait..."
  - Add an "Open your web browser" banner and rename labels (Local -> Local Access, External -> Worldwide Web Address)
  - Fix venv idempotency: check for bin/python instead of just directory existence; clean up partial venvs on retry
  - Fix build.sh CSS validation: handle the empty-CSS case that silently bypassed the check with "integer expression expected"
  - Fix install.sh sudo handling: try apt-get without sudo first (works when root), then escalate with per-package tracking and a user prompt
  - Fix install.ps1: check the exit code from studio setup and fail on error
  - Add pciutils to the WSL GGUF build dependencies
  - Apply the same smart apt-get escalation pattern to studio/setup.sh

* Use detected Python version for venv, abort on non-apt Linux

  - install.ps1: detect an existing Python 3.11/3.12/3.13 and use that version for venv creation instead of always forcing 3.13
  - install.sh: exit with an error on non-apt Linux distros when required packages cannot be auto-installed, instead of silently continuing

* Make the sudo permission prompt more prominent with a warning banner

* Add an Accept [Y/n] sudo prompt to studio/setup.sh for consistency

* Fix native command exit code handling and sudo decline flow

  - install.ps1: add $LASTEXITCODE checks after the winget (Python), uv venv, and uv pip install calls. $ErrorActionPreference only catches PowerShell cmdlet errors, not native executable failures. The Python check also handles winget returning non-zero for "already installed".
  - setup.sh: skip the llama-server build when the user declines sudo or sudo is unavailable. Previously the script continued to section 8, which would fail with confusing errors (e.g. "gcc: command not found") since build-essential was never installed.

* Move rm -rf llama.cpp inside the build branch to preserve an existing install

  When _SKIP_GGUF_BUILD is set (user declined sudo or sudo unavailable), the previous rm -rf would destroy an already-working llama-server before the skip check ran. Move it inside the else branch so existing builds are preserved when the rebuild is skipped.

---------

Co-authored-by: Daniel Han <danielhanchen@users.noreply.github.com>
Co-authored-by: Daniel Han <danielhanchen@gmail.com>
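A download() helper with a wget fallback, as described for install.sh, can be sketched like this. The function name matches the commit's description, but the exact flags and error message are illustrative assumptions, not the installer's verbatim code.

```shell
#!/bin/sh
# Sketch: fetch a URL to a file, preferring curl, falling back to wget.
download() {
    # $1 = URL, $2 = output file
    if command -v curl >/dev/null 2>&1; then
        curl -fsSL -o "$2" "$1"
    elif command -v wget >/dev/null 2>&1; then
        wget -qO "$2" "$1"
    else
        echo "error: neither curl nor wget is available" >&2
        return 1
    fi
}

# Example use: fetch the uv installer to a tempfile instead of piping
# it into sh, sidestepping the nested curl|sh stdin-stealing issue.
# download https://astral.sh/uv/install.sh /tmp/uv-install.sh
```

Downloading to a tempfile and then running it also makes the remaining bytes of the outer `curl | sh` stream safe: no child process reads from the pipe the parent shell is still executing from.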