- Added model mismatch warnings in colorize, enhance-faces, and upscale routes.
- Improved error handling in colorize, enhance_faces, remove_bg, restore, and upscale scripts with detailed logging.
- Updated Dockerfile to align NCCL versions for compatibility.
- Introduced a new full tool audit script to test all tools for functionality and GPU usage.
- Created Playwright E2E tests for GPU-dependent tools to ensure proper functionality and performance.
Ubuntu mirrors (security.ubuntu.com) are frequently unreachable from
GitHub Actions runners, causing all amd64 Docker builds to fail.
Instead of installing Node.js via NodeSource apt repo (which requires
working Ubuntu mirrors for the initial apt-get update), copy the Node
binary and modules directly from the official node:22-bookworm image.
Also add retry with backoff to the system deps apt-get update.
Ubuntu security mirrors can be unreachable from GitHub Actions runners.
Add a retry loop with increasing backoff delays (15s, 30s, 45s) around
apt-get update in the Node.js install step for the CUDA base image.
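The actual retry lives in a Dockerfile shell step; the same retry-with-backoff pattern can be sketched in TypeScript (the `retry` helper and its delay list are hypothetical, not the project's code):

```typescript
// Retry an async operation, sleeping for a fixed schedule of delays
// between attempts (mirrors the 15s/30s/45s apt-get update loop).
async function retry<T>(
  fn: () => Promise<T>,
  delaysMs: number[] = [15_000, 30_000, 45_000],
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= delaysMs.length) throw err; // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, delaysMs[attempt]));
    }
  }
}
```

A transient mirror outage then costs at most the sum of the delays instead of failing the whole build.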
- Parallelize all 14 model downloads using ThreadPoolExecutor (6 workers).
  Downloads were sequential (~30 min); they now run concurrently (~5-10 min).
- Switch Docker cache from type=gha to type=registry (GHCR).
  The GHA cache's 10 GB limit caused blob eviction and corrupted builds;
  the registry cache is unbounded and persists across runner instances.
- Add pip download cache mounts to all pip install layers.
  Prevents re-downloading packages when layers rebuild.
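The download script itself uses Python's ThreadPoolExecutor; the same bounded-concurrency idea can be sketched in TypeScript (`mapLimit` and its call are illustrative, not the project's code):

```typescript
// Run `worker` over `items` with at most `limit` tasks in flight,
// mirroring ThreadPoolExecutor(max_workers=6) for the 14 model downloads.
async function mapLimit<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function drain(): Promise<void> {
    while (next < items.length) {
      const i = next++; // index claim is synchronous, so no double work
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, drain),
  );
  return results;
}
```

With 14 downloads and 6 workers, total wall time approaches the longest single download plus queueing, rather than the sum of all of them.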
Reference new parseApiError, formatZodErrors, global error handler,
and playwright.docker.config.ts infrastructure. Remove stale
partialTools concept since every tool maps to exactly one bundle.
The Playwright-based Docker e2e tests use test.describe(), which is
incompatible with Vitest. Exclude tests/e2e-docker/ from Vitest's
test discovery, matching the existing tests/e2e/ exclusion.
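A minimal sketch of that exclusion, assuming a vitest.config.ts at the repo root (the surrounding entries are illustrative and may differ from the real config):

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    exclude: [
      "**/node_modules/**",
      "tests/e2e/**",        // existing Playwright exclusion
      "tests/e2e-docker/**", // Playwright Docker specs use test.describe()
    ],
  },
});
```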
- Eliminate [object Object] errors across all 20+ API routes
- Global Fastify error handler with full stack traces
- Image-to-PDF auth fix (Object.entries → headers.forEach)
- OCR verbose fallbacks with engine reporting
- Split multi-file with per-image subfolders in ZIP
- Batch support for blur-faces, strip-metadata, edit-metadata, vectorize
- Docker LOG_LEVEL=debug, PYTHONWARNINGS=default
- 20 Playwright e2e tests pass against Docker container
- Replace [object Object] errors with readable messages across all 20+ API
routes by normalizing Zod validation errors to strings (formatZodErrors)
- Add parseApiError() on frontend to defensively handle any details type
- Add global Fastify error handler with full stack traces in logs
- Fix image-to-pdf auth: Object.entries(headers) → headers.forEach()
- Fix passport-photo: safeParse + formatZodErrors, safe error extraction
- Fix OCR silent fallbacks: log exception type/message when falling back,
include actual engine used in API response and Docker logs
- Fix split tool: process all uploaded images, combine into ZIP with
subfolders per image
- Fix batch support for blur-faces, strip-metadata, edit-metadata,
vectorize: add processAllFiles branch for multi-file uploads
- Docker: LOG_LEVEL=debug, PYTHONWARNINGS=default for visibility
- Add Playwright e2e tests verifying all fixes against Docker container
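The two helpers above can be sketched as follows; this assumes only the `issues` subset of Zod's error shape, and the real implementations may differ:

```typescript
// Subset of a Zod issue that formatting relies on (see z.ZodIssue).
type Issue = { path: (string | number)[]; message: string };

// Normalize Zod issues to readable strings so an API error's `details`
// never serializes as "[object Object]".
function formatZodErrors(error: { issues: Issue[] }): string[] {
  return error.issues.map((issue) =>
    issue.path.length
      ? `${issue.path.join(".")}: ${issue.message}`
      : issue.message,
  );
}

// Frontend side: defensively render whatever shape `details` arrives in.
function parseApiError(details: unknown): string {
  if (typeof details === "string") return details;
  if (Array.isArray(details)) return details.map(String).join("; ");
  if (details && typeof details === "object") {
    try {
      return JSON.stringify(details);
    } catch {
      return String(details); // circular or otherwise unserializable
    }
  }
  return "Unknown error";
}
```

The pairing is deliberate: the backend emits strings, but the frontend still tolerates raw objects from any route that was missed.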
Revised bundles so every tool belongs to exactly one bundle with no
partial functionality. OCR and noise-removal are fully locked until
their bundles are installed. passport-photo includes mediapipe in the
Background Removal bundle. restore-photo gets its own bundle.
Development/testing always via Docker container.
Address:
- single-venv strategy (avoiding two-venv fragility)
- shared-package uninstall via reference counting
- route registration for uninstalled features (501 instead of 404)
- graceful degradation for multi-bundle tools
- frontend feature-status propagation
- local development compatibility
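The 501-instead-of-404 point can be sketched as a plain decision function; names here are hypothetical stand-ins for what the real Fastify handlers would wire in:

```typescript
type ToolResponse = { status: number; body: { error?: string; ok?: boolean } };

// Every tool route is registered unconditionally. A route whose feature
// bundle is missing answers 501 Not Implemented, so the frontend can
// distinguish "bundle not installed" from "no such endpoint" (404).
function handleToolRequest(
  bundle: string,
  installedBundles: Set<string>,
): ToolResponse {
  if (!installedBundles.has(bundle)) {
    return {
      status: 501,
      body: { error: `Feature bundle "${bundle}" is not installed` },
    };
  }
  return { status: 200, body: { ok: true } };
}
```

A 404 would be indistinguishable from a typo'd URL; 501 lets the UI show an "install this bundle" prompt instead of a generic failure.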
Reduce Docker image from ~30GB to ~5-6GB by making AI features
downloadable post-install. Users cherry-pick feature bundles
(Background Removal, OCR, etc.) from the UI after pulling.
Allows re-triggering the release workflow after a Docker push failure
without needing new commits. If semantic-release produces no new version,
the workflow now uses the latest existing git tag for the Docker build.
Prevents git on Windows (core.autocrlf=true) from checking out shell
scripts and Dockerfiles with CRLF line endings, which causes a
"bad interpreter" error when Docker runs entrypoint.sh on Windows.
Set U2NET_HOME=/opt/models/rembg so rembg models pre-downloaded at
build time as root are found at runtime by the non-root ashim user.
Without this, every fresh container re-downloaded the 973 MB BiRefNet
models on first background-removal request.
Apply the same fix to PaddleOCR: download to /opt/models/paddlex and
symlink into both /root/.paddlex and /app/.paddlex so PaddleX finds
models regardless of which HOME gosu resolves at runtime.
Fall back to per-request spawning in bridge.ts when the persistent
dispatcher crashes mid-request (e.g. OOM loading a large ONNX model),
so the operation succeeds instead of surfacing "Python dispatcher
exited unexpectedly" to the user.
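The fallback above can be sketched as follows; `DispatcherExitError`, `dispatch`, and `spawnOneOff` are hypothetical names standing in for bridge.ts internals:

```typescript
// Raised when the persistent Python dispatcher process dies mid-request.
class DispatcherExitError extends Error {}

// Try the persistent dispatcher first; if it crashed (e.g. OOM loading a
// large ONNX model), retry once in a fresh per-request process so the
// user's operation still succeeds.
async function runOperation<T>(
  dispatch: () => Promise<T>,
  spawnOneOff: () => Promise<T>,
): Promise<T> {
  try {
    return await dispatch();
  } catch (err) {
    if (!(err instanceof DispatcherExitError)) throw err; // real tool errors propagate
    return spawnOneOff();
  }
}
```

Only dispatcher-exit failures trigger the fallback; a genuine tool error (bad input, unsupported format) still surfaces to the caller unchanged.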
Improve entrypoint.sh permission warning to mention Windows bind mounts
as the likely cause.
- ci.yml: skip lint/test/docker on docs-only and markdown-only pushes
- deploy-docs.yml: only rebuild GitHub Pages when apps/docs/** changes
- README.md: updated key features and content
- images: updated dashboard screenshots, removed stale images