Commit graph

5 commits

Ashim
08a7ffe403 Enhance logging and error handling across tools; add full tool audit and Playwright tests
- Added model mismatch warnings in colorize, enhance-faces, and upscale routes.
- Improved error handling in colorize, enhance_faces, remove_bg, restore, and upscale scripts with detailed logging.
- Updated Dockerfile to align NCCL versions for compatibility.
- Introduced a new full tool audit script to test all tools for functionality and GPU usage.
- Created Playwright E2E tests for GPU-dependent tools to ensure proper functionality and performance.
2026-04-17 23:06:31 +08:00
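The model-mismatch warning described in this commit might look roughly like the sketch below. This is an illustration only: the function and logger names are invented, and the real checks live in the colorize, enhance-faces, and upscale routes.

```python
import logging

log = logging.getLogger("tools")

def warn_on_model_mismatch(requested: str, loaded: str) -> bool:
    """Log a warning when the model a route requested differs from the
    one actually loaded; return True if a mismatch was detected."""
    if requested != loaded:
        log.warning("model mismatch: requested %r but %r is loaded",
                    requested, loaded)
        return True
    return False
```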
Siddharth Kumar Sah
85b1cfc10a chore: rename Stirling-Image to ashim across entire codebase
Complete rebrand from Stirling-Image to ashim following the project
move to https://github.com/ashim-hq/ashim.

Changes across 117 files:
- Package scope: @stirling-image/* → @ashim/*
- GitHub URLs: stirling-image/stirling-image → ashim-hq/ashim
- Docker Hub: stirlingimage/stirling-image → ashimhq/ashim
- GitHub Pages: stirling-image.github.io → ashim-hq.github.io
- All branding text: "Stirling Image" → "ashim"
- Docker service/volumes/user: stirling → ashim
- Database: stirling.db → ashim.db
- localStorage keys: stirling-token → ashim-token
- Environment variables: STIRLING_GPU → ASHIM_GPU
- Python cache dirs: .cache/stirling-image → .cache/ashim
- SVG filter IDs, test prefixes, and all other references
2026-04-14 20:55:42 +08:00
Siddharth Kumar Sah
8d2f401512 fix: use torch.cuda for GPU detection instead of onnxruntime providers
onnxruntime-gpu reports CUDAExecutionProvider as "available" just
because the library was compiled with CUDA support, even on machines
with no GPU. This made gpu_available() return True incorrectly,
causing upscale.py to try torch.device("cuda") and fall back to
Lanczos instead of running Real-ESRGAN on CPU.

torch.cuda.is_available() actually probes the hardware. Use it as
the single source of truth for GPU detection.

Verified: CUDA image on Apple Silicon (no GPU) now correctly reports
gpu: false and all AI tools run on CPU without crashes.
2026-04-05 22:24:16 +08:00
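The detection change this commit describes can be sketched as follows. This is a minimal illustration, not the actual gpu.py code; the guarded import is an assumption so the sketch degrades to CPU when torch is absent.

```python
def gpu_available() -> bool:
    """Probe real hardware with torch.cuda; the provider list from
    onnxruntime-gpu only reflects how the wheel was compiled, not
    whether a GPU is actually present."""
    try:
        import torch  # optional dependency in this sketch
    except ImportError:
        return False  # no torch at all, so no CUDA path either
    return bool(torch.cuda.is_available())
```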
Siddharth Kumar Sah
a291d1fe0b fix: prevent false GPU detection when CUDA image runs without GPU
The STIRLING_GPU=true env var was baked into the :cuda Dockerfile,
which made gpu_available() return True without checking actual
hardware. On machines without a GPU, this would crash upscale.py
(torch.device("cuda") fails) and ocr.py (PaddleOCR use_gpu=True).

Fix: the env var can only disable GPU (set to false/0), never
force-enable it. Hardware detection always runs. Removed the
baked env var from the Dockerfile since it adds no value now.
2026-04-05 22:03:57 +08:00
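The "env var can only disable, never force-enable" rule might be implemented like this. A hedged sketch: the `probe` argument stands in for the real hardware check (e.g. `torch.cuda.is_available()`), and the function name is illustrative.

```python
import os

def gpu_available(probe=lambda: False) -> bool:
    """STIRLING_GPU may only opt OUT of GPU use (false/0); it can
    never force-enable it. The hardware probe has the final say."""
    if os.environ.get("STIRLING_GPU", "").strip().lower() in ("false", "0"):
        return False  # explicit opt-out wins
    return bool(probe())  # otherwise only real hardware decides
```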
Siddharth Kumar Sah
29a382e9e0 feat: add GPU/CUDA acceleration support (:cuda Docker tag)
Add a :cuda Docker image tag that auto-detects NVIDIA GPU at runtime
and falls back gracefully to CPU. Same pattern as Immich.

- New gpu.py shared utility for cached CUDA detection
- Background removal (rembg): pass CUDAExecutionProvider to ONNX Runtime
- Upscaling (Real-ESRGAN): use CUDA device + FP16 when GPU available
- OCR (PaddleOCR): enable use_gpu when CUDA detected
- Dispatcher reports GPU status at startup via readiness signal
- Admin health endpoint exposes GPU availability
- Dockerfile uses ARG GPU=false with conditional NVIDIA CUDA base image
- docker-compose.gpu.yml override for GPU users
- CI/CD workflows build and publish :cuda tag (amd64 only)

Three tags: :latest (CPU), :lite (no AI), :cuda (GPU with CPU fallback)
2026-04-05 19:12:45 +08:00
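The per-tool fallback pattern listed above could look roughly like this. The function names are invented for illustration; the `gpu` flag would come from the shared gpu.py detection.

```python
def onnx_providers(gpu: bool) -> list:
    """rembg / ONNX Runtime: prefer CUDA, always keep the CPU fallback."""
    providers = ["CPUExecutionProvider"]
    if gpu:
        providers.insert(0, "CUDAExecutionProvider")
    return providers

def esrgan_settings(gpu: bool) -> dict:
    """Real-ESRGAN: CUDA device plus FP16 when a GPU is present; CPU
    stays on FP32, since half precision is a GPU-side optimization."""
    return {"device": "cuda" if gpu else "cpu", "half": gpu}
```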