unsloth/studio/install_python_stack.py
Daniel Han 1f12ba16df
Combine studio setup fixes: frontend caching, venv isolation, Windows CPU support (#4413)
* Allow Windows setup to complete without NVIDIA GPU

setup.ps1 previously hard-exited if nvidia-smi was not found, blocking
setup entirely on CPU-only or non-NVIDIA machines. The backend already
supports CPU and MLX (Apple Silicon) in chat-only GGUF mode, and the
Linux/Mac setup.sh handles missing GPUs gracefully.

Changes:
- Convert the GPU check from a hard exit to a warning
- Guard CUDA toolkit installation behind $HasNvidiaSmi
- Install CPU-only PyTorch when no GPU is detected
- Build llama.cpp without CUDA flags when no GPU is present
- Update doc comment to reflect CPU support

* Cache frontend build across setup runs

Skip the frontend npm install + build if frontend/dist already exists.
Previously setup.ps1 nuked node_modules and package-lock.json on every
run, and both scripts always rebuilt even when dist/ was already present.

On a git clone editable install, the first setup run still builds the
frontend as before. Subsequent runs skip it, saving several minutes.
To force a rebuild, delete frontend/dist and re-run setup.

* Show pip progress for PyTorch download on Windows

The torch CUDA wheel is ~2.8 GB and the CPU wheel is ~300 MB. With
| Out-Null suppressing all output, the install appeared completely
frozen with no feedback. Remove | Out-Null for the torch install
lines so pip's download progress bar is visible. Add a size hint
so users know the download is expected to take a while.

Also moves the Triton success message inside the GPU branch so it
only prints when Triton was actually installed.

* Guard CUDA env re-sanitization behind GPU check in llama.cpp build

The CUDA_PATH re-sanitization block (lines 1020-1033) references
$CudaToolkitRoot which is only set when $HasNvidiaSmi is true and
the CUDA Toolkit section runs. On CPU-only machines, $CudaToolkitRoot
is null, causing Split-Path to throw:

  Split-Path : Cannot bind argument to parameter 'Path' because it is null.

Wrap the entire block in `if ($HasNvidiaSmi -and $CudaToolkitRoot)`.

* Rebuild frontend when source files are newer than dist/

Instead of only checking if dist/ exists, compare source file timestamps
against the dist/ directory. If any file in frontend/src/ is newer than
dist/, trigger a rebuild. This handles the case where a developer pulls
new frontend changes and re-runs setup -- stale assets get rebuilt
automatically.
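The timestamp comparison described above can be sketched in Python (illustrative only — the real logic lives in setup.sh as `find -newer` and in setup.ps1 as `Get-ChildItem`; the function name here is hypothetical):

```python
from pathlib import Path


def needs_rebuild(frontend: Path) -> bool:
    """Return True when dist/ is missing or any file under src/ is newer.

    Python sketch of the freshness check the setup scripts implement
    with `find -newer` (setup.sh) and Get-ChildItem (setup.ps1).
    """
    dist = frontend / "dist"
    if not dist.is_dir():
        return True  # never built -> build now
    dist_mtime = dist.stat().st_mtime
    return any(
        p.is_file() and p.stat().st_mtime > dist_mtime
        for p in (frontend / "src").rglob("*")
    )
```

Deleting `frontend/dist` still forces a rebuild, since the missing-directory case short-circuits before any timestamps are compared.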

* Fix cmake not found on Windows after winget install

Two issues fixed:

1. After winget installs cmake, Refresh-Environment may not pick up the
   new PATH entry (MSI PATH changes sometimes need a new shell). Added a
   fallback that probes cmake's default install locations (Program Files,
   LocalAppData) and adds the directory to PATH explicitly if found.

2. If cmake is still unavailable when the llama.cpp build starts (e.g.
   winget failed silently or PATH was not updated), the build now skips
   gracefully with a [SKIP] warning instead of crashing with
   "cmake : The term 'cmake' is not recognized".

* Fix frontend rebuild detection and decouple oxc-validator install

Address review feedback:

- Check entire frontend/ directory for changes, not just src/.
  The build also depends on package.json, vite.config.ts,
  tailwind.config.ts, public/, and other config files. A change
  to any of these now triggers a rebuild.
- Move oxc-validator npm install outside the frontend build gate
  in setup.sh so it always runs on setup, matching setup.ps1
  which already had it outside the gate.

* Show cmake errors on failure and retry CUDA VS integration with elevation

Two fixes for issue #4405 (Windows setup fails at cmake configure):

1. cmake configure: capture output and display it on failure instead of
   piping to Out-Null. When the error mentions "No CUDA toolset found",
   print a hint about the CUDA VS integration files.

2. CUDA VS integration copy: when the direct Copy-Item fails (needs
   admin access to write to Program Files), retry with Start-Process
   -Verb RunAs to prompt for elevation. This is the root cause of the
   "No CUDA toolset found" cmake failure -- the .targets files that let
   MSBuild compile .cu files are missing from the VS BuildCustomizations
   directory.

* Address reviewer feedback: cmake PATH persistence, stale cache, torch error check

1. Persist cmake PATH to user registry so Refresh-Environment cannot
   drop it later in the same setup run. Previously the process-only
   PATH addition at phase 1 could vanish when Refresh-Environment
   rebuilt PATH from registry during phase 2/3 installs.

2. Clean stale CMake cache before configure. If a previous run built
   with CUDA and the user reruns without a GPU (or vice versa), the
   cached GGML_CUDA value would persist. Now the build dir is removed
   before configure.

3. Explicitly set -DGGML_CUDA=OFF for CPU-only builds instead of just
   omitting CUDA flags. This prevents cmake from auto-detecting a
   partial CUDA installation.

4. Fix CUDA cmake flag indentation -- was misaligned from the original
   PR, now consistently indented inside the if/else block.

5. Fail hard if pip install torch returns a non-zero exit code instead
   of silently continuing with a broken environment.

* Remove extra CUDA cmake flags to align Windows with Linux build

Drop GGML_CUDA_FA_ALL_QUANTS, GGML_CUDA_F16, GGML_CUDA_GRAPHS,
GGML_CUDA_FORCE_CUBLAS, and GGML_CUDA_PEER_MAX_BATCH_SIZE flags.
The Linux build in setup.sh only sets GGML_CUDA=ON and lets llama.cpp
use its defaults for everything else. Keep Windows consistent.

* Address reviewer round 2: GPU probe fallback, Triton check, stale binary rebuild

1. GPU detection: fallback to default nvidia-smi install locations
   (Program Files\NVIDIA Corporation\NVSMI, System32) when nvidia-smi
   is not on PATH. Prevents silent CPU-only provisioning on machines
   that have a GPU but a broken PATH.

2. Triton: check $LASTEXITCODE after pip install and print [WARN]
   on failure instead of unconditional [OK].

3. Stale llama-server: check CMakeCache.txt for GGML_CUDA setting
   and rebuild if the existing binary does not match the current GPU
   mode (e.g. CUDA binary on a now-CPU-only rerun, or vice versa).

* Fix frontend rebuild detection and npm dependency issues

Addresses reviewer feedback on the frontend caching logic:

1. setup.sh: Fix broken find command that caused exit under pipefail.
   The piped `find | xargs find -newer` had paths after the expression
   which GNU find rejects. Replaced with a simpler `find -maxdepth 1
   -type f -newer dist/` that checks ALL top-level files (catches
   index.html, bun.lock, etc. that the extension allowlist missed).

2. setup.sh: Guard oxc-validator npm install behind `command -v npm`
   check. When the frontend build is skipped (dist/ is cached), Node
   bootstrap is also skipped, so npm may not be available.

3. setup.ps1: Replace Get-ChildItem -Include with explicit path
   probing for src/ and public/. PowerShell's -Include without a
   trailing wildcard silently returns nothing, so src/public changes
   were never detected. Also check ALL top-level files instead of
   just .json/.ts/.js/.mjs extensions.

* Fix studio setup: venv isolation, centralized .venv_t5, uv targeting

- All platforms (including Colab) now create ~/.unsloth/studio/.venv
  with --without-pip fallback for broken ensurepip environments
- Add --python sys.executable to uv pip install in install_python_stack.py
  so uv targets the correct venv instead of system Python
- Centralize .venv_t5 bootstrap in transformers_version.py with proper
  validation (checks required packages exist, not just non-empty dir)
- Replace ~150 lines of duplicated install code across 3 worker files
  with calls to the shared _ensure_venv_t5_exists() helper
- Use uv-if-present with pip fallback; do not install uv at runtime
- Add site.addsitedir() shim in colab.py so notebook cells can import
  studio packages from the venv without system-Python double-install
- Update .venv_t5 packages: huggingface_hub 1.3.0->1.7.1, add hf_xet
- Bump transformers pin 4.57.1->4.57.6 in requirements + constraints
- Add Fast-Install helper to setup.ps1 with uv+pip fallback
- Keep Colab-specific completion banner in setup.sh

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix nvidia-smi PATH persistence and cmake requirement for CPU-only

1. Store nvidia-smi as an absolute path ($NvidiaSmiExe) on first
   detection. All later calls (Get-CudaComputeCapability,
   Get-PytorchCudaTag, CUDA toolkit detection) use this absolute
   path instead of relying on PATH. This survives Refresh-Environment
   which rebuilds PATH from the registry and drops process-only
   additions.

2. Make cmake fatal for CPU-only installs. CPU-only machines depend
   entirely on llama-server for GGUF chat mode, so reporting "Setup
   Complete!" without it is misleading. GPU machines can still skip
   the llama-server build since they have other inference paths.

* Fix broken frontend freshness detection in setup scripts

- setup.sh: Replace broken `find | xargs find -newer` pipeline with
  single `find ... -newer` call. The old pipeline produced "paths must
  precede expression" errors (silently suppressed by 2>/dev/null),
  causing top-level config changes to never trigger a rebuild.
- setup.sh: Add `command -v npm` guard to oxc-validator block so it
  does not fail when Node was not installed (build-skip path).
- setup.ps1: Replace `Get-ChildItem -Include` (unreliable without
  -Recurse on PS 5.1) with explicit directory paths for src/ and
  public/ scanning.
- Both: Add *.html to tracked file patterns so index.html (Vite
  entry point) changes trigger a rebuild.
- Both: Use -print -quit instead of piping to head -1 for efficiency.

* Fix bugs found during review of PRs #4404, #4400, #4399

- setup.sh: Add || true guard to find command that checks frontend/src
  and frontend/public dirs, preventing script abort under set -euo
  pipefail when either directory is missing

- colab.py: Use sys.path.insert(0, ...) instead of site.addsitedir()
  so Studio venv packages take priority over system copies. Add warning
  when venv is missing instead of silently failing.

- transformers_version.py: _venv_t5_is_valid() now checks installed
  package versions via .dist-info metadata, not just directory presence.
  Prevents false positives from stale or wrong-version packages.

- transformers_version.py: _install_to_venv_t5() now passes --upgrade
  so pip replaces existing stale packages in the target directory.

- setup.ps1: CPU-only PyTorch install uses --index-url for cpu wheel
  and all install commands use Fast-Install (uv with pip fallback).

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix _venv_t5_is_valid dist-info loop exiting after first directory

Remove premature break that caused the loop over .dist-info directories
to exit after the first match even if it had no METADATA file. Now
continues iterating until a valid METADATA is found or all dirs are
exhausted.
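The corrected loop shape can be sketched as follows (names hypothetical — the real helper is _venv_t5_is_valid() in transformers_version.py and checks multiple packages):

```python
from __future__ import annotations

from pathlib import Path


def find_package_version(site_packages: Path, name: str) -> str | None:
    """Return the installed version of `name` by scanning *.dist-info dirs.

    Sketch of the fixed iteration: a dist-info directory without a
    METADATA file no longer ends the search -- we `continue` to the next
    candidate instead of breaking out after the first name match.
    """
    norm = name.lower().replace("-", "_")
    for info in sorted(site_packages.glob("*.dist-info")):
        dist_name, _, dist_version = info.name[: -len(".dist-info")].rpartition("-")
        if dist_name.lower().replace("-", "_") != norm:
            continue
        if not (info / "METADATA").is_file():
            continue  # incomplete install -- keep iterating, do not break
        return dist_version
    return None
```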

* Capture error output on failure instead of discarding with Out-Null

setup.ps1: 6 locations changed from `| Out-Null` to `| Out-String` with
output shown on failure -- PyTorch GPU/CPU install, Triton install,
venv_t5 package loop, cmake llama-server and llama-quantize builds.

transformers_version.py: clean stale .venv_t5 directory before reinstall
when validation detects missing or version-mismatched packages.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix ModuleNotFoundError when CLI imports studio.backend.core

The backend uses bare "from utils.*" imports everywhere, relying on
backend/ being on sys.path. Workers and routes add it at startup, but
the CLI imports studio.backend.core as a package -- backend/ was never
added. Add sys.path setup at the top of core/__init__.py so lazy
imports resolve correctly regardless of entry point.

Fixes: unsloth inference unsloth/Qwen3-8B "who are you" crashing with
"No module named 'utils'"

* Fix frontend freshness check to detect all top-level file changes

The extension allowlist (*.json, *.ts, *.js, *.mjs, *.html) missed
files like bun.lock, so lockfile-only dependency changes could skip
the frontend rebuild. Check all top-level files instead.

* Add tiktoken to .venv_t5 for Qwen-family tokenizers

Qwen models use tiktoken-based tokenizers which fail when routed through
the transformers 5.x overlay without tiktoken installed. Add it to the
setup scripts (with deps for Windows) and runtime fallback list.

Integrates PR #4418.

* Fix tiktoken crash in _venv_t5_is_valid and stray brace in setup.ps1

_venv_t5_is_valid() crashed with ValueError on unpinned packages like
"tiktoken" (no ==version). Handle by splitting safely and skipping
version check for unpinned packages (existence check only).

Also remove stray closing brace in setup.ps1 tiktoken install block.
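The safe-split idea can be sketched as (function name hypothetical; the real check lives inside _venv_t5_is_valid()):

```python
from __future__ import annotations


def parse_requirement(spec: str) -> tuple[str, str | None]:
    """Split 'name==version' into (name, version); unpinned -> (name, None).

    A two-element unpacking like `name, version = spec.split("==")`
    raises ValueError on an unpinned spec such as "tiktoken";
    partition() handles both pinned and unpinned forms.
    """
    name, sep, version = spec.partition("==")
    return name.strip(), (version.strip() if sep else None)
```

For unpinned packages only the existence check applies, since there is no version to compare against.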

---------

Co-authored-by: Daniel Han <danielhanchen@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2026-03-18 03:52:25 -07:00


#!/usr/bin/env python3
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
"""Cross-platform Python dependency installer for Unsloth Studio.

Called by both setup.sh (Linux / WSL) and setup.ps1 (Windows) after the
virtual environment is already activated. Expects `pip` and `python` on
PATH to point at the venv.
"""
from __future__ import annotations
import os
import shutil
import subprocess
import sys
import tempfile
import urllib.request
from pathlib import Path

IS_WINDOWS = sys.platform == "win32"
# ── Verbosity control ──────────────────────────────────────────────────────────
# By default the installer shows a minimal progress bar (one line, in-place).
# Set UNSLOTH_VERBOSE=1 in the environment to restore full per-step output:
# Linux/Mac: UNSLOTH_VERBOSE=1 ./studio/setup.sh
# Windows: $env:UNSLOTH_VERBOSE="1" ; .\studio\setup.ps1
VERBOSE: bool = os.environ.get("UNSLOTH_VERBOSE", "0") == "1"
# Progress bar state — updated by _progress() as each install step runs.
# _TOTAL counts: pip-upgrade + 7 shared steps + triton (non-Windows) + local-plugin + finalize
# Update _TOTAL here if you add or remove install steps in install_python_stack().
_STEP: int = 0
_TOTAL: int = 0 # set at runtime in install_python_stack() based on platform
# ── Paths ──────────────────────────────────────────────────────────────
SCRIPT_DIR = Path(__file__).resolve().parent
REQ_ROOT = SCRIPT_DIR / "backend" / "requirements"
SINGLE_ENV = REQ_ROOT / "single-env"
CONSTRAINTS = SINGLE_ENV / "constraints.txt"
LOCAL_DD_UNSTRUCTURED_PLUGIN = (
    SCRIPT_DIR / "backend" / "plugins" / "data-designer-unstructured-seed"
)
# ── Color support ──────────────────────────────────────────────────────
def _enable_colors() -> bool:
    """Try to enable ANSI color support. Returns True if available."""
    if not hasattr(sys.stdout, "fileno"):
        return False
    try:
        if not os.isatty(sys.stdout.fileno()):
            return False
    except Exception:
        return False
    if IS_WINDOWS:
        try:
            import ctypes

            kernel32 = ctypes.windll.kernel32
            # Enable ENABLE_VIRTUAL_TERMINAL_PROCESSING (0x0004) on stdout
            handle = kernel32.GetStdHandle(-11)  # STD_OUTPUT_HANDLE
            mode = ctypes.c_ulong()
            kernel32.GetConsoleMode(handle, ctypes.byref(mode))
            kernel32.SetConsoleMode(handle, mode.value | 0x0004)
            return True
        except Exception:
            return False
    return True  # Unix terminals support ANSI by default
# Colors disabled — Colab and most CI runners render ANSI fine, but plain output
# is cleaner in the notebook cell. Re-enable by setting _HAS_COLOR = _enable_colors()
_HAS_COLOR = False


def _green(msg: str) -> str:
    return f"\033[92m{msg}\033[0m" if _HAS_COLOR else msg


def _cyan(msg: str) -> str:
    return f"\033[96m{msg}\033[0m" if _HAS_COLOR else msg


def _red(msg: str) -> str:
    return f"\033[91m{msg}\033[0m" if _HAS_COLOR else msg


def _progress(label: str) -> None:
    """Print an in-place progress bar for the current install step.

    Uses only stdlib (sys.stdout) — no extra packages required.
    In VERBOSE mode this is a no-op; per-step labels are printed by run() instead.
    """
    global _STEP
    _STEP += 1
    if VERBOSE:
        return  # verbose mode: run() already printed the label
    width = 20
    filled = int(width * _STEP / _TOTAL)
    bar = "=" * filled + "-" * (width - filled)
    end = "\n" if _STEP >= _TOTAL else ""  # newline only on the final step
    sys.stdout.write(f"\r[{bar}] {_STEP:2}/{_TOTAL} {label:<40}{end}")
    sys.stdout.flush()


def run(
    label: str, cmd: list[str], *, quiet: bool = True
) -> subprocess.CompletedProcess[bytes]:
    """Run a command; on failure print output and exit."""
    if VERBOSE:
        print(f" {label}...")
    result = subprocess.run(
        cmd,
        stdout = subprocess.PIPE if quiet else None,
        stderr = subprocess.STDOUT if quiet else None,
    )
    if result.returncode != 0:
        print(_red(f"{label} failed (exit code {result.returncode}):"))
        if result.stdout:
            print(result.stdout.decode(errors = "replace"))
        sys.exit(result.returncode)
    return result
# Packages to skip on Windows (require special build steps)
WINDOWS_SKIP_PACKAGES = {"open_spiel", "triton_kernels"}
# ── uv bootstrap ──────────────────────────────────────────────────────
USE_UV = False # Set by _bootstrap_uv() at the start of install_python_stack()
UV_NEEDS_SYSTEM = False # Set by _bootstrap_uv() via probe


def _bootstrap_uv() -> bool:
    """Check if uv is available and probe whether --system is needed."""
    global UV_NEEDS_SYSTEM
    if not shutil.which("uv"):
        return False
    # Probe: try a dry-run install targeting the current Python explicitly.
    # Without --python, uv can ignore the activated venv on some platforms.
    probe = subprocess.run(
        ["uv", "pip", "install", "--dry-run", "--python", sys.executable, "pip"],
        stdout = subprocess.PIPE,
        stderr = subprocess.STDOUT,
    )
    if probe.returncode != 0:
        # Retry with --system (some envs need it when uv can't find a venv)
        probe_sys = subprocess.run(
            ["uv", "pip", "install", "--dry-run", "--system", "pip"],
            stdout = subprocess.PIPE,
            stderr = subprocess.STDOUT,
        )
        if probe_sys.returncode != 0:
            return False  # uv is broken, fall back to pip
        UV_NEEDS_SYSTEM = True
    return True


def _filter_requirements(req: Path, skip: set[str]) -> Path:
    """Return a temp copy of a requirements file with certain packages removed."""
    lines = req.read_text(encoding = "utf-8").splitlines(keepends = True)
    filtered = [
        line
        for line in lines
        if not any(line.strip().lower().startswith(pkg) for pkg in skip)
    ]
    tmp = tempfile.NamedTemporaryFile(
        mode = "w",
        suffix = ".txt",
        delete = False,
        encoding = "utf-8",
    )
    tmp.writelines(filtered)
    tmp.close()
    return Path(tmp.name)


def _translate_pip_args_for_uv(args: tuple[str, ...]) -> list[str]:
    """Translate pip flags to their uv equivalents."""
    translated: list[str] = []
    for arg in args:
        if arg == "--no-cache-dir":
            continue  # uv cache is fast; drop this flag
        elif arg == "--force-reinstall":
            translated.append("--reinstall")
        else:
            translated.append(arg)
    return translated


def _build_pip_cmd(args: tuple[str, ...]) -> list[str]:
    """Build a standard pip install command."""
    cmd = [sys.executable, "-m", "pip", "install"]
    cmd.extend(args)
    return cmd


def _build_uv_cmd(args: tuple[str, ...]) -> list[str]:
    """Build a uv pip install command with translated flags."""
    cmd = ["uv", "pip", "install"]
    if UV_NEEDS_SYSTEM:
        cmd.append("--system")
    # Always pass --python so uv targets the correct environment.
    # Without this, uv can ignore an activated venv and install into
    # the system Python (observed on Colab and similar environments).
    cmd.extend(["--python", sys.executable])
    cmd.extend(_translate_pip_args_for_uv(args))
    cmd.append("--torch-backend=auto")
    return cmd


def pip_install(
    label: str,
    *args: str,
    req: Path | None = None,
    constrain: bool = True,
) -> None:
    """Build and run a pip install command (uses uv when available, falls back to pip)."""
    constraint_args: list[str] = []
    if constrain and CONSTRAINTS.is_file():
        constraint_args = ["-c", str(CONSTRAINTS)]
    actual_req = req
    if req is not None and IS_WINDOWS and WINDOWS_SKIP_PACKAGES:
        actual_req = _filter_requirements(req, WINDOWS_SKIP_PACKAGES)
    req_args: list[str] = []
    if actual_req is not None:
        req_args = ["-r", str(actual_req)]
    try:
        if USE_UV:
            uv_cmd = _build_uv_cmd(args) + constraint_args + req_args
            if VERBOSE:
                print(f" {label}...")
            result = subprocess.run(
                uv_cmd,
                stdout = subprocess.PIPE,
                stderr = subprocess.STDOUT,
            )
            if result.returncode == 0:
                return
            print(_red(" uv failed, falling back to pip..."))
            if result.stdout:
                print(result.stdout.decode(errors = "replace"))
        pip_cmd = _build_pip_cmd(args) + constraint_args + req_args
        run(f"{label} (pip)" if USE_UV else label, pip_cmd)
    finally:
        if actual_req is not None and actual_req != req:
            actual_req.unlink(missing_ok = True)


def download_file(url: str, dest: Path) -> None:
    """Download a file using urllib (no curl dependency)."""
    urllib.request.urlretrieve(url, dest)


def patch_package_file(package_name: str, relative_path: str, url: str) -> None:
    """Download a file from url and overwrite a file inside an installed package."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "show", package_name],
        capture_output = True,
        text = True,
    )
    if result.returncode != 0:
        print(_red(f" ⚠️ Could not find package {package_name}, skipping patch"))
        return
    location = None
    for line in result.stdout.splitlines():
        if line.lower().startswith("location:"):
            location = line.split(":", 1)[1].strip()
            break
    if not location:
        print(_red(f" ⚠️ Could not determine location of {package_name}"))
        return
    dest = Path(location) / relative_path
    print(_cyan(f" Patching {dest.name} in {package_name}..."))
    download_file(url, dest)
# ── Main install sequence ─────────────────────────────────────────────
def install_python_stack() -> int:
    global USE_UV, _STEP, _TOTAL
    _STEP = 0
    _TOTAL = 10 if IS_WINDOWS else 11
    # 1. Upgrade pip (needed even with uv as fallback and for bootstrapping)
    _progress("pip upgrade")
    run("Upgrading pip", [sys.executable, "-m", "pip", "install", "--upgrade", "pip"])
    # Try to use uv for faster installs
    USE_UV = _bootstrap_uv()
    # 2. Core packages: unsloth-zoo + unsloth
    _progress("base packages")
    pip_install(
        "Installing base packages",
        "--no-cache-dir",
        req = REQ_ROOT / "base.txt",
    )
    # 3. Extra dependencies
    _progress("unsloth extras")
    pip_install(
        "Installing additional unsloth dependencies",
        "--no-cache-dir",
        req = REQ_ROOT / "extras.txt",
    )
    # 3b. Extra dependencies (no-deps) — audio model support etc.
    _progress("extra codecs")
    pip_install(
        "Installing extras (no-deps)",
        "--no-deps",
        "--no-cache-dir",
        req = REQ_ROOT / "extras-no-deps.txt",
    )
    # 4. Overrides (torchao, transformers) — force-reinstall
    _progress("dependency overrides")
    pip_install(
        "Installing dependency overrides",
        "--force-reinstall",
        "--no-cache-dir",
        req = REQ_ROOT / "overrides.txt",
    )
    # 5. Triton kernels (no-deps, from source)
    if not IS_WINDOWS:
        _progress("triton kernels")
        pip_install(
            "Installing triton kernels",
            "--no-deps",
            "--no-cache-dir",
            req = REQ_ROOT / "triton-kernels.txt",
            constrain = False,
        )
    # # 6. Patch: override llama_cpp.py with fix from unsloth-zoo feature/llama-cpp-windows-support branch
    # patch_package_file(
    #     "unsloth-zoo",
    #     os.path.join("unsloth_zoo", "llama_cpp.py"),
    #     "https://raw.githubusercontent.com/unslothai/unsloth-zoo/refs/heads/main/unsloth_zoo/llama_cpp.py",
    # )
    # # 7a. Patch: override vision.py with fix from unsloth PR #4091
    # patch_package_file(
    #     "unsloth",
    #     os.path.join("unsloth", "models", "vision.py"),
    #     "https://raw.githubusercontent.com/unslothai/unsloth/80e0108a684c882965a02a8ed851e3473c1145ab/unsloth/models/vision.py",
    # )
    # # 7b. Patch: override save.py with fix from feature/llama-cpp-windows-support
    # patch_package_file(
    #     "unsloth",
    #     os.path.join("unsloth", "save.py"),
    #     "https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/unsloth/save.py",
    # )
    # 8. Studio dependencies
    _progress("studio deps")
    pip_install(
        "Installing studio dependencies",
        "--no-cache-dir",
        req = REQ_ROOT / "studio.txt",
    )
    # 9. Data-designer dependencies
    _progress("data designer deps")
    pip_install(
        "Installing data-designer base dependencies",
        "--no-cache-dir",
        req = SINGLE_ENV / "data-designer-deps.txt",
    )
    # 10. Data-designer packages (no-deps to avoid conflicts)
    _progress("data designer")
    pip_install(
        "Installing data-designer",
        "--no-cache-dir",
        "--no-deps",
        req = SINGLE_ENV / "data-designer.txt",
    )
    # 11. Local Data Designer seed plugin
    if not LOCAL_DD_UNSTRUCTURED_PLUGIN.is_dir():
        print(
            _red(
                f"❌ Missing local plugin directory: {LOCAL_DD_UNSTRUCTURED_PLUGIN}",
            ),
        )
        return 1
    _progress("local plugin")
    pip_install(
        "Installing local data-designer unstructured plugin",
        "--no-cache-dir",
        "--no-deps",
        str(LOCAL_DD_UNSTRUCTURED_PLUGIN),
        constrain = False,
    )
    # 12. Patch metadata for single-env compatibility
    _progress("finalizing")
    run(
        "Patching single-env metadata",
        [sys.executable, str(SINGLE_ENV / "patch_metadata.py")],
    )
    # 13. Final check (silent; third-party conflicts are expected)
    subprocess.run(
        [sys.executable, "-m", "pip", "check"],
        stdout = subprocess.DEVNULL,
        stderr = subprocess.DEVNULL,
    )
    print(_green("✅ Python dependencies installed"))
    return 0


if __name__ == "__main__":
    sys.exit(install_python_stack())