docs: comprehensive documentation and version sync (v1.2.4)

Jacob Magar 2026-04-05 03:34:19 -04:00
parent 685775de25
commit 391463b942
46 changed files with 1489 additions and 498 deletions


@@ -1,7 +1,7 @@
 {
   "name": "unraid-mcp",
   "displayName": "Unraid MCP",
-  "version": "1.2.3",
+  "version": "1.2.4",
   "description": "Query, monitor, and manage Unraid servers via GraphQL API through MCP tools. Supports system info, Docker, VMs, array/parity, notifications, plugins, rclone, and live telemetry.",
   "author": {
     "name": "Jacob Magar",


@@ -1,6 +1,6 @@
 {
   "name": "unraid-mcp",
-  "version": "1.2.3",
+  "version": "1.2.4",
   "description": "Unraid server management via MCP.",
   "homepage": "https://github.com/jmagar/unraid-mcp",
   "repository": "https://github.com/jmagar/unraid-mcp",


@@ -24,8 +24,6 @@ UNRAID_MCP_BEARER_TOKEN=your_bearer_token
 # Safety flags
 # ------------
-UNRAID_MCP_ALLOW_DESTRUCTIVE=false
-UNRAID_MCP_ALLOW_YOLO=false
 UNRAID_MCP_DISABLE_HTTP_AUTH=false
 # Docker user / network


@@ -7,6 +7,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [Unreleased]
+
+## [1.2.4] - 2026-04-04
+### Added
+- **Comprehensive test suite**: Added tests for core modules, configuration, validation, subscriptions, and edge cases
+- **Test coverage documentation**: `tests/TEST_COVERAGE.md` with coverage map and gap analysis
+### Changed
+- **Documentation**: Comprehensive updates across CLAUDE.md, README, and reference docs
+- **Version sync**: Fixed `pyproject.toml` version mismatch (was 1.2.2, now aligned with all manifests at 1.2.4)
 ## [1.2.3] - 2026-04-03
 ### Changed


@@ -383,8 +383,6 @@ just setup
 | `UNRAID_MCP_PORT` | No | `6970` | Listen port for HTTP transports |
 | `UNRAID_MCP_BEARER_TOKEN` | Conditional | — | Static Bearer token for HTTP transports; auto-generated on first start if unset |
 | `UNRAID_MCP_DISABLE_HTTP_AUTH` | No | `false` | Set `true` to skip Bearer auth (use behind a reverse proxy that handles auth) |
-| `UNRAID_MCP_ALLOW_DESTRUCTIVE` | No | `false` | Reserved safety flag |
-| `UNRAID_MCP_ALLOW_YOLO` | No | `false` | Reserved safety flag |
 | `DOCKER_NETWORK` | No | — | External Docker network to join; leave blank for default bridge |
 | `PGID` | No | `1000` | Container process GID |
 | `PUID` | No | `1000` | Container process UID |


@@ -1,6 +1,6 @@
 {
   "name": "unraid-mcp",
-  "version": "1.2.3",
+  "version": "1.2.4",
   "description": "Query, monitor, and manage Unraid servers via GraphQL API through MCP tools. Supports system info, Docker, VMs, array/parity, notifications, plugins, and live telemetry.",
   "mcpServers": {
     "unraid-mcp": {


@@ -10,7 +10,7 @@ build-backend = "hatchling.build"
 # ============================================================================
 [project]
 name = "unraid-mcp"
-version = "1.2.2"
+version = "1.2.4"
 description = "MCP Server for Unraid API - provides tools to interact with an Unraid server's GraphQL API"
 readme = "README.md"
 license = {file = "LICENSE"}

tests/TEST_COVERAGE.md (new file, 932 lines)

@@ -0,0 +1,932 @@
# TEST_COVERAGE.md — `tests/test_live.sh`
Canonical live integration test for the `unraid-mcp` server. This document is the authoritative
reference for what the script tests, how every assertion is structured, and how to run each mode.
A QA engineer should be able to verify correctness of the script without executing it.
---
## 1. Overview
| Field | Value |
|---|---|
| Script | `tests/test_live.sh` |
| Service under test | Unraid home-server OS (NAS / hypervisor) |
| MCP server exercised | `unraid-mcp` — Python MCP server that proxies Unraid's GraphQL API |
| Transport protocols covered | Streamable-HTTP (primary), Docker container (build + run), stdio (subprocess) |
| Test approach | Direct JSON-RPC 2.0 over HTTP — no mcporter or secondary proxy dependency |
| Total tool subactions exercised | 47 (45 read-only + 2 destructive-guard bypass) |
| Destructive operations | None executed — the two state-changing subactions are called with `confirm=true` against a nonexistent ID / no target, verifying that the guard bypass works without the operation actually completing |
### What the script is not
- It is not a unit test. It requires (or optionally skips) a live Unraid API.
- It does not verify response _values_ beyond structural presence — it checks that the tool
returned HTTP 200 with `isError != true`, not that specific field values match expected
business data.
- It does not test write operations (container start/stop, VM actions beyond force_stop guard
check, array operations, etc.) to avoid causing data loss or service disruption.
---
## 2. Prerequisites
The script checks for these binaries at startup and exits with code `2` if either is absent:
- `curl` — HTTP client for all network requests
- `jq` — JSON parsing for all assertions
For Docker mode: `docker` must be in `PATH` (soft requirement — skipped with `SKIP` if absent).
For stdio mode: `uv` must be in `PATH` (soft requirement — skipped with `SKIP` if absent).
---
## 3. How to Run
### 3.1 Modes and flags
```bash
# Default: runs all three modes sequentially (http → docker → stdio)
./tests/test_live.sh
# HTTP only — fastest, requires a running server
./tests/test_live.sh --mode http
# Docker only — builds image, starts container, tests, tears down
./tests/test_live.sh --mode docker
# Stdio only — spawns server subprocess via uvx
./tests/test_live.sh --mode stdio
# All three modes explicitly
./tests/test_live.sh --mode all
# Override endpoint and token
./tests/test_live.sh --url http://myhost:6970/mcp --token mytoken
# Skip auth tests (use when behind an OAuth gateway that handles auth)
./tests/test_live.sh --skip-auth
# Skip tool smoke tests (no live Unraid API available — tests MCP protocol only)
./tests/test_live.sh --skip-tools
# Show raw HTTP response bodies alongside test output
./tests/test_live.sh --verbose
```
### 3.2 Environment variables
| Variable | Required for | Default | Description |
|---|---|---|---|
| `UNRAID_API_URL` | docker, stdio | `http://127.0.0.1:1` (dummy) | Unraid GraphQL API base URL |
| `UNRAID_API_KEY` | docker, stdio | `ci-fake-key` (dummy) | Unraid API key |
| `UNRAID_MCP_BEARER_TOKEN` | http, docker | auto-read from `~/.unraid-mcp/.env` | MCP bearer token for authenticated requests |
| `TOKEN` | http, docker | alias for above | Alternate env var for the bearer token |
| `PORT` | all | `6970` | Override the server port |
### 3.3 Token auto-detection
If `TOKEN` / `UNRAID_MCP_BEARER_TOKEN` is not set on the command line or in the environment,
the script reads `~/.unraid-mcp/.env` and extracts `UNRAID_MCP_BEARER_TOKEN=...` from it.
If the file does not exist or the variable is absent, `TOKEN` remains empty and auth tests
are silently skipped.
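A minimal sketch of this fallback, using a temporary file in place of `~/.unraid-mcp/.env` (the exact parsing in `test_live.sh` may differ; the sample token value is made up):

```shell
# Demo of the token auto-detection fallback against a throwaway .env file.
unset TOKEN UNRAID_MCP_BEARER_TOKEN

tmpdir=$(mktemp -d)
printf 'UNRAID_MCP_BEARER_TOKEN=abc123\n' > "$tmpdir/.env"

TOKEN="${TOKEN:-${UNRAID_MCP_BEARER_TOKEN:-}}"
if [ -z "$TOKEN" ] && [ -f "$tmpdir/.env" ]; then
  # Take everything after the first '=' on the matching line
  TOKEN=$(grep -m1 '^UNRAID_MCP_BEARER_TOKEN=' "$tmpdir/.env" | cut -d= -f2-)
fi
echo "TOKEN=$TOKEN"
rm -rf "$tmpdir"
```

If neither the variables nor the file yields a value, `TOKEN` stays empty, which is what triggers the silent auth-test skip described above.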
### 3.4 Exit codes
| Code | Meaning |
|---|---|
| `0` | All tests passed (or intentionally skipped) |
| `1` | One or more tests failed |
| `2` | Prerequisite check failed (`curl` or `jq` missing, or invalid `--mode`) |
---
## 4. Test Phases
The script is structured into four numbered phases, run in order within each mode. Phases 1–4
share common implementation functions; each mode (http, docker, stdio) calls them after
establishing its own transport.
### Phase 1 — Middleware (no auth)
**Purpose:** Verify that unauthenticated HTTP endpoints respond correctly. These endpoints must
be publicly accessible without a bearer token (RFC 8414 / OAuth protected resource metadata).
**Runs in:** HTTP mode and Docker mode. Not run in stdio mode.
### Phase 2 — Auth enforcement
**Purpose:** Verify that the MCP endpoint enforces bearer token authentication — rejecting
requests with no token (401), rejecting requests with a wrong token (401), and accepting
requests with the correct token.
**Runs in:** HTTP mode and Docker mode. Not run in stdio mode.
### Phase 3 — MCP Protocol
**Purpose:** Verify the MCP JSON-RPC handshake (`initialize`, `tools/list`, `ping`) works
correctly and returns well-formed responses with the expected structure.
**Runs in:** HTTP mode, Docker mode, and stdio mode (stdio has its own Phase 3 implementation).
### Phase 4 — Tool smoke-tests (non-destructive)
**Purpose:** Call every read-only `unraid` tool subaction and verify it returns HTTP 200 with
`isError != true`. No assertions are made on response field values — this phase proves
connectivity and basic API reachability.
**Runs in:** HTTP mode and Docker mode only. Skipped with `--skip-tools`.
### Phase 4b — Destructive action guards
**Purpose:** Verify that destructive operations do NOT require the user to re-confirm when
`confirm=true` is passed — i.e., `confirm=true` correctly bypasses the guard prompt.
---
## 5. Phase 1 — Middleware (no auth)
### 5.1 `/health` endpoint
| Field | Value |
|---|---|
| URL | `GET {base_url}/health` |
| Auth | None (unauthenticated) |
| Expected HTTP status | `200` |
| jq assertion | `.status == "ok"` |
| PASS | HTTP 200 AND body contains `{"status":"ok"}` |
| FAIL | Any other status code, or status field is not `"ok"` |
The base URL is derived from `MCP_URL` by stripping the trailing `/mcp` path segment
(e.g., `http://localhost:6970/mcp` → `http://localhost:6970`).
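An offline sketch of this check: the base-URL derivation is as described above, while the HTTP exchange is stubbed with a sample body so the logic runs without a live server (the real script uses `curl` and asserts `.status == "ok"` with jq):

```shell
# Derive the base URL by stripping the trailing /mcp segment
MCP_URL="http://localhost:6970/mcp"
BASE_URL="${MCP_URL%/mcp}"

# Stub for: curl -s -o body -w '%{http_code}' "$BASE_URL/health"
HTTP_STATUS=200
HTTP_BODY='{"status":"ok"}'

if [ "$HTTP_STATUS" = "200" ] && printf '%s' "$HTTP_BODY" | grep -q '"status"[[:space:]]*:[[:space:]]*"ok"'; then
  health_result=PASS
else
  health_result=FAIL
fi
echo "$BASE_URL $health_result"
```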
### 5.2 `/.well-known/oauth-protected-resource`
| Field | Value |
|---|---|
| URL | `GET {base_url}/.well-known/oauth-protected-resource` |
| Auth | None |
| Expected HTTP status | `200` |
| PASS | HTTP 200 |
| FAIL | Any other status |
On HTTP 200, two sub-assertions are evaluated:
**Sub-assertion A — `bearer_methods_supported` present:**
| Field | Value |
|---|---|
| jq filter | `.bearer_methods_supported \| length > 0` |
| PASS | Array is non-empty |
| FAIL | Array is absent, null, or empty |
| SKIP | Parent assertion (HTTP 200) failed |
**Sub-assertion B — `resource` field present:**
| Field | Value |
|---|---|
| jq filter | `.resource \| length > 0` |
| PASS | String is non-empty |
| FAIL | Field is absent, null, or empty |
| SKIP | Parent assertion (HTTP 200) failed |
### 5.3 `/.well-known/oauth-protected-resource/mcp`
| Field | Value |
|---|---|
| URL | `GET {base_url}/.well-known/oauth-protected-resource/mcp` |
| Auth | None |
| Expected HTTP status | `200` |
| PASS | HTTP 200 |
| FAIL | Any other status |
No sub-assertions — presence of the endpoint is sufficient.
---
## 6. Phase 2 — Auth enforcement
Phase 2 is skipped entirely if `--skip-auth` is passed, or if no token is configured (in which
case auth is assumed to be disabled). All three tests are marked `SKIP` with a reason string.
### 6.1 No-token request
**What it does:** Sends a `POST` to `MCP_URL` with a valid JSON-RPC `ping` payload but with
no `Authorization` header.
```
POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream
{"jsonrpc":"2.0","id":99,"method":"ping","params":null}
```
| Field | Value |
|---|---|
| Expected HTTP status | `401` |
| PASS | HTTP status is exactly `"401"` |
| FAIL | Any other status (e.g., `200` would indicate auth is disabled) |
### 6.2 Wrong-token request
**What it does:** Sends the same `ping` payload with a deliberately incorrect bearer token:
`Bearer this-is-the-wrong-token-intentionally`.
| Field | Value |
|---|---|
| Expected HTTP status | `401` |
| PASS (preferred) | HTTP 401 AND `.error == "invalid_token"` in response body |
| PASS (fallback) | HTTP 401 with any error field value (or absent) |
| FAIL | Any non-401 status |
The test inspects the response body's `.error` field. If it equals `"invalid_token"` the label
reads `"bad-token → 401 invalid_token"`; otherwise it reads `"bad-token → 401 (error field: …)"`.
Both are recorded as PASS — the sub-check on the error field value is informational.
### 6.3 Good-token request
**What it does:** Sends a full MCP `initialize` request with the configured valid bearer token.
```
POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream
Authorization: Bearer <TOKEN>
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{
"protocolVersion":"2024-11-05",
"capabilities":{},
"clientInfo":{"name":"test_live","version":"0"}
}}
```
| Field | Value |
|---|---|
| Condition for PASS | HTTP status is NOT `401` AND NOT `403` |
| PASS | Any status other than 401 or 403 (typically 200) |
| FAIL | HTTP 401 or 403 |
This test does not assert a specific status — it only proves the server does not reject a
valid token.
---
## 7. Phase 3 — MCP Protocol
### 7.1 `initialize`
**What it does:** Posts MCP protocol initialization request.
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {"name": "test_live", "version": "0"}
}
}
```
| Field | Value |
|---|---|
| Expected HTTP status | `200` |
| PASS | HTTP 200 |
| FAIL | Any other status (with body excerpt in label) |
On HTTP 200, two sub-assertions:
**`serverInfo.name` present:**
| jq filter | `.result.serverInfo.name \| length > 0` |
|---|---|
| PASS | Name string is non-empty |
| FAIL | Field absent, null, or empty |
| SKIP | `initialize` returned non-200 |
**`protocolVersion` present:**
| jq filter | `.result.protocolVersion \| length > 0` |
|---|---|
| PASS | Version string is non-empty |
| FAIL | Field absent, null, or empty |
| SKIP | `initialize` returned non-200 |
The `mcp_post` helper extracts the `Mcp-Session-Id` response header and stores it in
`MCP_SESSION_ID`. Subsequent requests within the same mode's run include this header automatically.
SSE response handling: if the response body contains lines starting with `data:`, the helper
extracts the first `data:` line and strips the prefix before parsing JSON.
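A sketch of these two behaviors, run against sample data (the header and body contents here are made up):

```shell
raw_headers=$'HTTP/1.1 200 OK\r\nMcp-Session-Id: sess-42\r\n'
raw_body=$'event: message\ndata: {"jsonrpc":"2.0","id":1,"result":{}}'

# 1. Capture the Mcp-Session-Id header for reuse on later requests
MCP_SESSION_ID=$(printf '%s' "$raw_headers" | sed -n 's/^[Mm]cp-[Ss]ession-[Ii]d: //p' | tr -d '\r')

# 2. If the body is SSE, keep only the first data: line, prefix stripped
case "$raw_body" in
  *data:*) HTTP_BODY=$(printf '%s\n' "$raw_body" | sed -n 's/^data: //p' | head -n1) ;;
  *)       HTTP_BODY="$raw_body" ;;
esac

echo "$MCP_SESSION_ID"
echo "$HTTP_BODY"
```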
### 7.2 `tools/list`
```json
{"jsonrpc":"2.0","id":2,"method":"tools/list","params":null}
```
| Field | Value |
|---|---|
| Expected HTTP status | `200` |
| PASS | HTTP 200 |
| FAIL | Any other status |
The tool count (`jq '.result.tools \| length'`) is printed in the PASS label for informational
purposes but not asserted against a minimum.
On HTTP 200, two sub-assertions:
**`unraid` tool present:**
| jq filter | `.result.tools[] \| select(.name == "unraid") \| .name` |
|---|---|
| PASS | Filter returns non-empty, non-null string |
| FAIL | No tool named `"unraid"` found |
| SKIP | `tools/list` returned non-200 |
**`diagnose_subscriptions` tool present:**
| jq filter | `.result.tools[] \| select(.name == "diagnose_subscriptions") \| .name` |
|---|---|
| PASS | Filter returns non-empty, non-null string |
| FAIL | No tool named `"diagnose_subscriptions"` found |
| SKIP | `tools/list` returned non-200 |
These two assertions confirm the server exposes both expected top-level tools.
### 7.3 `ping`
```json
{"jsonrpc":"2.0","id":3,"method":"ping","params":null}
```
| Field | Value |
|---|---|
| Expected HTTP status | `200` |
| PASS | HTTP 200 |
| SKIP (not FAIL) | Any non-200 status — `ping` is treated as optional |
Ping is not a required MCP method; the test tolerates absence.
---
## 8. Phase 4 — Tool Smoke Tests
All smoke tests use the `call_unraid` helper, which:
1. Builds a `tools/call` JSON-RPC request targeting the `unraid` tool with the given
`action` and `subaction` arguments.
2. Sends it via `mcp_post`.
3. PASS condition: HTTP 200 AND `.result.isError != true`.
4. FAIL condition: HTTP status other than 200, OR `.result.isError == true`.
When `isError` is true, the first 100 characters of `.result.content[0].text` are appended
to the FAIL label.
The JSON-RPC payload structure for each call:
```json
{
"jsonrpc": "2.0",
"id": <N>,
"method": "tools/call",
"params": {
"name": "unraid",
"arguments": {
"action": "<action>",
"subaction": "<subaction>"
}
}
}
```
Extra arguments (e.g., `provider_type`) are merged into the `arguments` object via jq.
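A sketch of that merge (requires jq; the helper name `build_call` is ours — the real script does this inside `call_unraid`):

```shell
# Build a tools/call payload, merging any extra-args JSON into arguments.
build_call() {
  extra=${3:-'{}'}
  jq -cn --arg a "$1" --arg s "$2" --argjson extra "$extra" '
    {jsonrpc: "2.0", id: 1, method: "tools/call",
     params: {name: "unraid",
              arguments: ({action: $a, subaction: $s} + $extra)}}'
}

payload=$(build_call rclone config_form '{"provider_type":"s3"}')
echo "$payload"
```

With no third argument, `arguments` contains only `action` and `subaction`, matching the payload shape shown above.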
### 8.1 Complete list of smoke-tested subactions
#### `health` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid health/check` | `health` | `check` | — | Basic connectivity check to Unraid API |
| `unraid health/test_connection` | `health` | `test_connection` | — | Tests GraphQL API reachability |
| `unraid health/diagnose` | `health` | `diagnose` | — | Detailed health diagnostic |
#### `system` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid system/overview` | `system` | `overview` | — | Full system overview |
| `unraid system/network` | `system` | `network` | — | Network interfaces and configuration |
| `unraid system/array` | `system` | `array` | — | Disk array state |
| `unraid system/registration` | `system` | `registration` | — | License/registration info |
| `unraid system/variables` | `system` | `variables` | — | Unraid system variables |
| `unraid system/metrics` | `system` | `metrics` | — | Performance metrics |
| `unraid system/services` | `system` | `services` | — | Running services |
| `unraid system/display` | `system` | `display` | — | Display/UI settings |
| `unraid system/config` | `system` | `config` | — | System configuration |
| `unraid system/online` | `system` | `online` | — | Online/connectivity status |
| `unraid system/owner` | `system` | `owner` | — | Server owner information |
| `unraid system/settings` | `system` | `settings` | — | System settings |
| `unraid system/server` | `system` | `server` | — | Single server info |
| `unraid system/servers` | `system` | `servers` | — | All known servers |
| `unraid system/flash` | `system` | `flash` | — | USB flash device info |
| `unraid system/ups_devices` | `system` | `ups_devices` | — | UPS device list |
#### `array` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid array/parity_status` | `array` | `parity_status` | — | Current parity check status |
| `unraid array/parity_history` | `array` | `parity_history` | — | Historical parity check records |
#### `disk` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid disk/shares` | `disk` | `shares` | — | User shares list |
| `unraid disk/disks` | `disk` | `disks` | — | All disk devices |
| `unraid disk/log_files` | `disk` | `log_files` | — | Available log files |
#### `docker` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid docker/list` | `docker` | `list` | — | All Docker containers |
| `unraid docker/networks` | `docker` | `networks` | — | Docker networks |
#### `vm` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid vm/list` | `vm` | `list` | — | All virtual machines |
#### `notification` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid notification/overview` | `notification` | `overview` | — | Notification summary counts |
| `unraid notification/list` | `notification` | `list` | — | Full notification list |
| `unraid notification/recalculate` | `notification` | `recalculate` | — | Trigger notification recalculation |
#### `user` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid user/me` | `user` | `me` | — | Current authenticated user info |
#### `key` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid key/list` | `key` | `list` | — | API keys list |
#### `rclone` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid rclone/list_remotes` | `rclone` | `list_remotes` | — | Configured rclone remotes |
| `unraid rclone/config_form` | `rclone` | `config_form` | `{"provider_type":"s3"}` | Config form for S3 provider |
`rclone/config_form` is the only smoke test that passes extra arguments — `provider_type` is
set to `"s3"` to exercise argument merging.
#### `plugin` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid plugin/list` | `plugin` | `list` | — | Installed Unraid plugins |
#### `customization` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid customization/theme` | `customization` | `theme` | — | Active UI theme |
| `unraid customization/public_theme` | `customization` | `public_theme` | — | Public-facing theme settings |
| `unraid customization/sso_enabled` | `customization` | `sso_enabled` | — | SSO enabled flag |
| `unraid customization/is_initial_setup` | `customization` | `is_initial_setup` | — | Whether initial setup is complete |
#### `oidc` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid oidc/providers` | `oidc` | `providers` | — | Configured OIDC providers |
| `unraid oidc/public_providers` | `oidc` | `public_providers` | — | Public OIDC provider list |
| `unraid oidc/configuration` | `oidc` | `configuration` | — | OIDC server configuration |
#### `live` action
| Test label | action | subaction | Extra args | Notes |
|---|---|---|---|---|
| `unraid live/cpu` | `live` | `cpu` | — | Real-time CPU usage |
| `unraid live/memory` | `live` | `memory` | — | Real-time memory usage |
| `unraid live/cpu_telemetry` | `live` | `cpu_telemetry` | — | Detailed CPU telemetry |
| `unraid live/notifications_overview` | `live` | `notifications_overview` | — | Live notification overview |
### 8.2 What "PASS" means for each smoke test
For all 45 subactions listed above, PASS means:
1. The MCP server returned HTTP `200`.
2. The response body does NOT have `.result.isError == true`.
PASS does NOT mean:
- The Unraid API returned useful data.
- Any specific field is present in the response.
- The response matches a schema.
These tests are "did it blow up" smoke tests, not field-level validation tests.
---
## 9. Phase 4b — Destructive Action Guards
This sub-phase tests that the `confirm=true` flag correctly bypasses the safety guard that
would otherwise tell the user to re-run the command with `confirm=true`.
The guard check logic: if the tool response body (`.result.content[0].text`) contains the
string `"re-run with confirm"` (case-insensitive), the guard did NOT accept `confirm=true`
and the test fails.
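The check above can be sketched as follows (the function name and sample response texts are ours):

```shell
# Returns success when the response text does NOT still ask for confirm=true.
guard_accepted() {
  # $1 = the tool response text (.result.content[0].text)
  ! printf '%s' "$1" | grep -qi 're-run with confirm'
}

# Guard accepted confirm=true: the error is about the bogus ID, not the guard
guard_accepted 'Notification test-guard-check-nonexistent not found' \
  && verdict1=PASS || verdict1=FAIL

# Guard still prompting: this would be a Phase 4b failure
guard_accepted 'Destructive action: re-run with confirm=true to proceed' \
  && verdict2=PASS || verdict2=FAIL

echo "$verdict1 $verdict2"
```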
### 9.1 `notification/delete` guard bypass
```json
{
"name": "unraid",
"arguments": {
"action": "notification",
"subaction": "delete",
"confirm": true,
"notification_id": "test-guard-check-nonexistent"
}
}
```
| Field | Value |
|---|---|
| Test label | `notification/delete guard bypass` |
| `notification_id` | `"test-guard-check-nonexistent"` — deliberately nonexistent ID |
| FAIL (guard rejected) | Response text matches `/re-run with confirm/i` |
| PASS | HTTP 200 AND guard text not present (even if deletion fails due to nonexistent ID) |
| FAIL (other) | HTTP status other than 200 |
The nonexistent ID ensures no actual notification is deleted. The test only verifies that
`confirm=true` was accepted by the guard layer.
### 9.2 `vm/force_stop` guard bypass
```json
{
"name": "unraid",
"arguments": {
"action": "vm",
"subaction": "force_stop",
"confirm": true
}
}
```
| Field | Value |
|---|---|
| Test label | `vm/force_stop guard bypass` |
| Extra args | None beyond `confirm=true` |
| FAIL (guard rejected) | Response text matches `/re-run with confirm/i` |
| PASS | HTTP 200 AND guard text not present |
| FAIL (other) | HTTP status other than 200 |
No VM ID is supplied, so the actual force-stop operation is expected to fail at the API level
(no target) rather than succeed. The guard bypass test passes regardless of whether the
underlying operation succeeds.
---
## 10. Skipped Tests and Why
| Test / Section | Skip condition | Reason |
|---|---|---|
| Phase 2 (all three auth tests) | `--skip-auth` flag | OAuth gateway handles auth externally; MCP server may not enforce tokens |
| Phase 2 (all three auth tests) | No token configured | Auth appears disabled; can't meaningfully test 401 behavior |
| Phase 4 and 4b (all tool tests) | `--skip-tools` flag | No live Unraid API available; Phase 3 protocol tests remain active |
| Docker mode (all) | `docker` not in `PATH` | Docker unavailable in this environment |
| Stdio mode (all) | `uv` not in `PATH` | `uv` Python runner unavailable |
| `ping → 200` | Server returns non-200 | `ping` is optional in MCP; treated as non-fatal |
| `serverInfo.name` / `protocolVersion` | `initialize` returned non-200 | Parent test failed; child tests skipped with `"initialize failed"` |
| `unraid tool present` / `diagnose_subscriptions present` | `tools/list` returned non-200 | Parent test failed; child tests skipped with `"tools/list failed"` |
| `bearer_methods_supported` / `resource` | `/.well-known/…` returned non-200 | Parent test failed; child tests skipped with `"parent failed"` |
| Container teardown | Container already removed | Marked SKIP (not FAIL) — idempotent teardown |
**Why write operations are excluded from Phase 4:**
The script's design philosophy is "non-destructive" smoke testing. Operations that create,
modify, or delete state on the Unraid server (array operations, container start/stop, VM
create/delete, user management writes, plugin install/uninstall, etc.) are not called in
Phase 4 to avoid data loss, service disruption, or hard-to-reverse side effects in a CI or
production environment.
The only partial exception is Phase 4b, which calls two destructive subactions
(`notification/delete`, `vm/force_stop`) but does so with a nonexistent ID / no ID, ensuring
the underlying API operation cannot succeed even if the guard is bypassed.
---
## 11. Docker Mode — Full Lifecycle
Docker mode does a complete lifecycle: build image → start container → health poll →
run all four phases → tear down.
### 11.1 Prerequisites
- `docker` in `PATH` (otherwise entire docker mode is `SKIP`).
- `UNRAID_API_URL` and `UNRAID_API_KEY` env vars (defaults to dummy values if unset).
### 11.2 Build
```bash
docker build -t unraid-mcp-test <REPO_DIR>
```
| Field | Value |
|---|---|
| Image name | `unraid-mcp-test` |
| Build context | Repository root (`$REPO_DIR`) — uses `Dockerfile` at repo root |
| stdout/stderr | Suppressed (`>/dev/null 2>&1`) |
| PASS | Build exits 0 |
| FAIL | Build exits non-zero — all subsequent docker tests are skipped (early return) |
### 11.3 Container start
```bash
docker run -d \
--name unraid-mcp-test-<PID> \
-p <PORT>:6970 \
-e UNRAID_MCP_TRANSPORT=streamable-http \
-e UNRAID_MCP_BEARER_TOKEN=ci-integration-token \
-e UNRAID_MCP_DISABLE_HTTP_AUTH=false \
-e UNRAID_API_URL=<UNRAID_API_URL or http://127.0.0.1:1> \
-e UNRAID_API_KEY=<UNRAID_API_KEY or ci-fake-key> \
unraid-mcp-test
```
Key environment variables injected into the container:
| Variable | Value |
|---|---|
| `UNRAID_MCP_TRANSPORT` | `streamable-http` |
| `UNRAID_MCP_BEARER_TOKEN` | `ci-integration-token` (hardcoded test token) |
| `UNRAID_MCP_DISABLE_HTTP_AUTH` | `false` (auth is enabled) |
| `UNRAID_API_URL` | From env or dummy `http://127.0.0.1:1` |
| `UNRAID_API_KEY` | From env or dummy `ci-fake-key` |
After `docker run`, `TOKEN` is set to `"ci-integration-token"` so Phase 2 auth tests use
the correct token. `MCP_URL` is updated to `http://localhost:<PORT>/mcp`.
Container name includes the shell PID (`$$`) to avoid name collisions in parallel CI runs.
### 11.4 Health poll
```bash
# Polls up to 30 times, 1 second apart
curl -sf -H "Accept: application/json, text/event-stream" \
http://localhost:<PORT>/health
```
| Field | Value |
|---|---|
| Poll interval | 1 second |
| Max attempts | 30 (30 second timeout) |
| PASS | Server responds to `/health` within 30 seconds |
| FAIL | No healthy response after 30 seconds — last 20 lines of container logs printed |
On FAIL, the container is removed and the function returns 1 (aborting docker mode).
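The poll loop can be sketched like this, with the probe stubbed out so it runs offline (the real probe is the `curl -sf … /health` call shown above):

```shell
probe() { [ "$1" -ge 3 ]; }   # stub: pretend the server is healthy from attempt 3

healthy=0
attempt=1
while [ "$attempt" -le 30 ]; do
  if probe "$attempt"; then
    healthy=1
    break
  fi
  # sleep 1   # the real script waits 1 second between attempts
  attempt=$((attempt + 1))
done
echo "healthy=$healthy attempts=$attempt"
```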
### 11.5 Test phases
Runs `run_phase1`, `run_phase2`, `run_phase3`, `run_phase4` against the container's endpoint.
`MCP_SESSION_ID` is reset to empty before Phase 1.
### 11.6 Teardown
```bash
docker rm -f unraid-mcp-test-<PID>
```
| Field | Value |
|---|---|
| PASS | Container removed successfully |
| SKIP | `docker rm -f` fails (container already gone — treated as idempotent) |
Teardown runs regardless of test phase outcomes (there is no `trap`; the teardown call is
simply unconditional in the function body).
---
## 12. Stdio Mode — Subprocess Protocol Handshake
Stdio mode bypasses HTTP entirely. It spawns the MCP server as a subprocess, writes JSON-RPC
requests to stdin, and reads responses from stdout.
### 12.1 Prerequisites
- `uv` in `PATH` (otherwise entire stdio mode is `SKIP`).
- `UNRAID_API_URL` and `UNRAID_API_KEY` env vars (defaults to dummy values if unset).
### 12.2 Server invocation
```bash
printf '%s\n%s\n' "$init_req" "$list_req" \
| UNRAID_MCP_TRANSPORT=stdio \
UNRAID_API_URL=<...> \
UNRAID_API_KEY=<...> \
uv run --directory <REPO_DIR> --from . unraid-mcp-server \
2>/dev/null \
| head -c 16384
```
Key details:
- Transport is set to `stdio` via `UNRAID_MCP_TRANSPORT=stdio`.
- `uv run` is used to launch the server from the repository root without a separate install step.
- The entry point is the `unraid-mcp-server` console script defined in `pyproject.toml`.
- stderr is discarded (`2>/dev/null`) — only stdout (JSON-RPC responses) is captured.
- Output is capped at 16 KiB (`head -c 16384`) to prevent runaway output.
- The subprocess exits naturally when stdin is closed (end of `printf` pipe).
### 12.3 Two requests sent
**Request 1 — `initialize`:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {"name": "test_live_stdio", "version": "0"}
}
}
```
**Request 2 — `tools/list`:**
```json
{"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": null}
```
### 12.4 Response parsing
The server is expected to write one JSON object per line (newline-delimited JSON). The script
parses:
- Line 1 = `initialize` response
- Line 2 = `tools/list` response
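A sketch of that line-oriented parsing against a sample two-line stdout (the `serverInfo` and tool values here stand in for the real subprocess output):

```shell
stdout=$'{"jsonrpc":"2.0","id":1,"result":{"serverInfo":{"name":"unraid-mcp"}}}\n{"jsonrpc":"2.0","id":2,"result":{"tools":[{"name":"unraid"}]}}'

init_line=$(printf '%s\n' "$stdout" | sed -n '1p')   # line 1: initialize
list_line=$(printf '%s\n' "$stdout" | sed -n '2p')   # line 2: tools/list

echo "$init_line"
echo "$list_line"
```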
**Initialize response assertions:**
| Assertion | jq filter | PASS | FAIL |
|---|---|---|---|
| Response received | `.result.serverInfo.name \| length > 0` | Non-empty name | No response or invalid JSON |
| `serverInfo.name` logged | `jq -r '.result.serverInfo.name'` | Prints name in label | (informational — no separate pass/fail) |
Note: the `serverInfo.name` value is extracted and embedded in the PASS label string
(e.g., `"stdio: serverInfo.name = unraid-mcp"`). Both are recorded as separate PASS entries.
**`tools/list` response assertions:**
| Assertion | jq filter | PASS | FAIL |
|---|---|---|---|
| Response received with tools | `.result.tools \| length > 0` | At least 1 tool | Empty or missing |
| `unraid` tool present | `.result.tools[] \| select(.name == "unraid")` | Match found | No `unraid` tool |
The tool count is embedded in the PASS label (e.g., `"stdio: tools/list response (2 tools)"`).
### 12.5 What stdio mode does NOT test
- Auth (no bearer token in stdio mode — transport is direct)
- Phase 1 middleware endpoints (no HTTP server running)
- Phase 4 tool calls (no HTTP mode infrastructure)
- SSE response format (stdio uses plain newline-delimited JSON)
---
## 13. Output Format and Interpretation
### 13.1 Per-test lines
Each test produces one line:
```
<label padded to 62 chars> PASS (green)
<label padded to 62 chars> FAIL (red)
<label padded to 62 chars> SKIP (yellow, with reason in dim)
```
Color codes are stripped when stdout is not a TTY (e.g., CI log files).
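A sketch of that result-line formatting (the escape codes are our assumption; the 62-column pad and TTY check are from the description above):

```shell
print_result() {
  if [ -t 1 ]; then
    case $2 in
      PASS) printf '%-62s \033[32m%s\033[0m\n' "$1" "$2" ;;
      FAIL) printf '%-62s \033[31m%s\033[0m\n' "$1" "$2" ;;
      *)    printf '%-62s \033[33m%s\033[0m\n' "$1" "$2" ;;
    esac
  else
    # Not a TTY (e.g., CI log capture): emit no color codes
    printf '%-62s %s\n' "$1" "$2"
  fi
}

# Inside $( ), stdout is a pipe, so the plain branch runs
line=$(print_result '/health -> 200 {status:ok}' PASS)
echo "$line"
```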
### 13.2 Section headers
```
━━━ Phase 1 · Middleware (no auth) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### 13.3 Summary block
```
Results: 47 passed 0 failed 3 skipped (50 total)
```
On any failures, a bullet list of failed test labels follows:
```
Failed tests:
• /health → 200 {status:ok}
• initialize → 200
```
### 13.4 Verbose mode
With `--verbose`, raw HTTP response bodies are printed in dim text after each request.
jq filter and truncated body (first 300 chars) are also printed after each failed
`assert_jq` call.
### 13.5 Interpreting results
| Result | Meaning |
|---|---|
| All PASS | Server is correctly implementing the MCP protocol and all Unraid API endpoints are reachable |
| FAIL on Phase 1 | Server not running, wrong port, or middleware misconfigured |
| FAIL on Phase 2 | Auth layer broken — token not being enforced or correct token being rejected |
| FAIL on Phase 3 | MCP protocol handler broken — `initialize` or `tools/list` not returning correct structure |
| FAIL on Phase 4 | Specific Unraid API action/subaction failing — check Unraid API connectivity and API key |
| FAIL on Phase 4b | Guard layer broken — `confirm=true` not being recognized |
| SKIP (most) | Expected in CI without live Unraid API — use `--skip-tools` |
| SKIP on Docker | Docker not available in this environment |
| SKIP on stdio | `uv` not available in this environment |
---
## 14. Internal Helper Reference
### `mcp_post METHOD [PARAMS_JSON]`
Posts a JSON-RPC 2.0 request to `MCP_URL`. Sets globals:
- `HTTP_STATUS` — curl HTTP status code string (e.g., `"200"`)
- `HTTP_BODY` — response body (SSE `data:` prefix stripped if present)
- `MCP_RESULT``.result` field from body (may be empty)
- `MCP_ERROR``.error` field from body (may be empty)
- `MCP_SESSION_ID` — updated if `Mcp-Session-Id` header present in response
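One detail worth illustrating is the SSE `data:` prefix stripping applied to `HTTP_BODY`. A hedged sketch of just that step (the real helper also runs curl and parses response headers):

```shell
#!/usr/bin/env bash
# Strip a leading SSE "data: " prefix if present; pass plain JSON through.
strip_sse() {
  local body=$1
  case $body in
    'data: '*) printf '%s' "${body#data: }" ;;
    *)         printf '%s' "$body" ;;
  esac
}

HTTP_BODY=$(strip_sse 'data: {"jsonrpc":"2.0","id":1,"result":{}}')
echo "$HTTP_BODY"
```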
### `assert_jq LABEL JSON_INPUT JQ_FILTER`
Evaluates `jq -r` with the given filter against the given JSON input. PASS if the result
is non-empty, non-`null`, and non-`false`; FAIL otherwise.
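The pass condition reduces to a truthiness check on the `jq -r` output, sketched here without jq itself (the function name is an assumption, not the script's):

```shell
#!/usr/bin/env bash
# Mirrors the documented rule: non-empty, not "null", not "false" => PASS.
jq_result_truthy() {
  local result=$1
  [ -n "$result" ] && [ "$result" != "null" ] && [ "$result" != "false" ]
}

jq_result_truthy 'unraid' && echo PASS
jq_result_truthy 'null'   || echo 'FAIL (null)'
jq_result_truthy ''       || echo 'FAIL (empty)'
```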
### `call_unraid LABEL ACTION SUBACTION [EXTRA_ARGS_JSON]`
Builds and posts a `tools/call` for the `unraid` tool. PASS if HTTP 200 and `isError != true`.
Extra args JSON is merged into the `arguments` object.
### `http_get URL [extra_curl_args...]`
Simple GET request. Sets `HTTP_STATUS` and `HTTP_BODY`. No auth headers added.
### `_guard_bypass_test LABEL ACTION SUBACTION [ARGS_JSON]`
Internal function for Phase 4b. Sends `tools/call` with `confirm: true`. Checks that the
response text does NOT contain `"re-run with confirm"`. PASS means the guard accepted `confirm=true`.
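The guard check is a simple substring test on the response text; a minimal sketch (helper name is illustrative):

```shell
#!/usr/bin/env bash
# PASS when the guard marker text is absent, i.e. confirm=true was accepted.
guard_accepted() {
  case $1 in
    *'re-run with confirm'*) return 1 ;;  # guard still blocking
    *)                       return 0 ;;  # guard bypassed as intended
  esac
}

guard_accepted 'Notification deleted.' && echo PASS
guard_accepted 'Destructive action: re-run with confirm=true' || echo FAIL
```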
---
## 15. Coverage Summary
| Category | Subactions tested | Destructive? |
|---|---|---|
| health | 3 (check, test_connection, diagnose) | No |
| system | 16 (overview, network, array, registration, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices) | No |
| array | 2 (parity_status, parity_history) | No |
| disk | 3 (shares, disks, log_files) | No |
| docker | 2 (list, networks) | No |
| vm | 1 (list) | No |
| notification | 3 (overview, list, recalculate) | No |
| user | 1 (me) | No |
| key | 1 (list) | No |
| rclone | 2 (list_remotes, config_form) | No |
| plugin | 1 (list) | No |
| customization | 4 (theme, public_theme, sso_enabled, is_initial_setup) | No |
| oidc | 3 (providers, public_providers, configuration) | No |
| live | 4 (cpu, memory, cpu_telemetry, notifications_overview) | No |
| **Guard bypass** | 2 (notification/delete, vm/force_stop) | Guarded (confirm=true) |
| **Total** | **48** | — |
**Not tested (by design):**
- `docker/start`, `docker/stop`, `docker/restart`, `docker/remove` — container state changes
- `vm/start`, `vm/stop`, `vm/restart`, `vm/remove` — VM state changes
- `array/start`, `array/stop`, `array/mount`, `array/unmount` — array state changes
- `notification/delete` (actual execution) — tested as guard bypass only
- `vm/force_stop` (actual execution) — tested as guard bypass only
- `user/create`, `user/delete`, `user/update` — user management writes
- `plugin/install`, `plugin/uninstall`, `plugin/update` — plugin management writes
- `key/create`, `key/delete` — API key management
- `rclone/create_remote`, `rclone/delete_remote` — rclone configuration writes
- `customization/update` — UI customization writes
- Any SSE subscription or long-polling endpoints

View file

@ -369,7 +369,6 @@ class TestDataReception:
assert "test_sub" in mgr.resource_data
assert mgr.resource_data["test_sub"].data == {"test": {"value": 42}}
assert mgr.resource_data["test_sub"].subscription_type == "test_sub"
async def test_data_message_for_legacy_protocol(self) -> None:
mgr = SubscriptionManager()
@ -756,20 +755,19 @@ class TestResourceData:
mgr.resource_data["test"] = SubscriptionData(
data={"key": "value"},
last_updated=datetime.now(UTC),
subscription_type="test",
)
result = await mgr.get_resource_data("test")
assert result == {"key": "value"}
def test_list_active_subscriptions_empty(self) -> None:
async def test_list_active_subscriptions_empty(self) -> None:
mgr = SubscriptionManager()
assert mgr.list_active_subscriptions() == []
assert await mgr.list_active_subscriptions() == []
def test_list_active_subscriptions_returns_names(self) -> None:
async def test_list_active_subscriptions_returns_names(self) -> None:
mgr = SubscriptionManager()
mgr.active_subscriptions["sub_a"] = MagicMock()
mgr.active_subscriptions["sub_b"] = MagicMock()
result = mgr.list_active_subscriptions()
result = await mgr.list_active_subscriptions()
assert sorted(result) == ["sub_a", "sub_b"]
@ -804,7 +802,6 @@ class TestSubscriptionStatus:
mgr.resource_data["logFileSubscription"] = SubscriptionData(
data={"log": "content"},
last_updated=datetime.now(UTC),
subscription_type="logFileSubscription",
)
status = await mgr.get_subscription_status()
assert status["logFileSubscription"]["data"]["available"] is True

View file

@ -217,6 +217,8 @@ class TestInfoQueries:
"ups_devices",
"ups_device",
"ups_config",
"server_time",
"timezones",
}
assert set(QUERIES.keys()) == expected_actions
@ -470,16 +472,17 @@ class TestVmQueries:
errors = _validate_operation(schema, QUERIES["list"])
assert not errors, f"list query validation failed: {errors}"
def test_details_query(self, schema: GraphQLSchema) -> None:
from unraid_mcp.tools._vm import _VM_QUERIES as QUERIES
def test_details_uses_list_query(self, schema: GraphQLSchema) -> None:
"""details reuses the list query — VmDomain has no richer fields."""
from unraid_mcp.tools._vm import _VM_LIST_QUERY
errors = _validate_operation(schema, QUERIES["details"])
assert not errors, f"details query validation failed: {errors}"
errors = _validate_operation(schema, _VM_LIST_QUERY)
assert not errors, f"list query (used by details) validation failed: {errors}"
def test_all_vm_queries_covered(self, schema: GraphQLSchema) -> None:
from unraid_mcp.tools._vm import _VM_QUERIES as QUERIES
assert set(QUERIES.keys()) == {"list", "details"}
assert set(QUERIES.keys()) == {"list"}
class TestVmMutations:
@ -626,6 +629,7 @@ class TestNotificationMutations:
expected = {
"create",
"notify_if_unique",
"archive",
"mark_unread",
"delete",
@ -723,7 +727,7 @@ class TestKeysQueries:
def test_all_keys_queries_covered(self, schema: GraphQLSchema) -> None:
from unraid_mcp.tools._key import _KEY_QUERIES as QUERIES
assert set(QUERIES.keys()) == {"list", "get"}
assert set(QUERIES.keys()) == {"list", "get", "possible_roles"}
class TestKeysMutations:

View file

@ -137,26 +137,16 @@ def test_collect_actions_all_handled():
If this test fails, a new key was added to COLLECT_ACTIONS in
subscriptions/queries.py without adding a corresponding if-branch in
tools/_live.py, which would cause a ToolError('this is a bug') at runtime.
Fix: add an if-branch in _handle_live AND add the key to
_HANDLED_COLLECT_SUBACTIONS.
Fix: add an if-branch in _handle_live for the new key.
"""
from unraid_mcp.subscriptions.queries import COLLECT_ACTIONS
from unraid_mcp.tools._live import _HANDLED_COLLECT_SUBACTIONS
import inspect
unhandled = set(COLLECT_ACTIONS) - _HANDLED_COLLECT_SUBACTIONS
from unraid_mcp.subscriptions.queries import COLLECT_ACTIONS
from unraid_mcp.tools._live import _handle_live
source = inspect.getsource(_handle_live)
unhandled = {key for key in COLLECT_ACTIONS if f'"{key}"' not in source}
assert not unhandled, (
f"COLLECT_ACTIONS keys without handlers in _handle_live: {unhandled}. "
"Add an if-branch in unraid_mcp/tools/_live.py and update _HANDLED_COLLECT_SUBACTIONS."
"Add an if-branch in unraid_mcp/tools/_live.py."
)
def test_collect_actions_rejects_stale_handled_keys(monkeypatch):
import unraid_mcp.tools._live as live_module
monkeypatch.setattr(
live_module,
"_HANDLED_COLLECT_SUBACTIONS",
frozenset({"log_tail", "notification_feed", "stale_key"}),
)
with pytest.raises(RuntimeError, match="stale"):
live_module._assert_collect_subactions_complete()

View file

@ -43,12 +43,15 @@ class TestLiveResourcesUseManagerCache:
@pytest.mark.usefixtures("_mock_ensure_started")
async def test_resource_returns_cached_data(self, action: str) -> None:
cached = {"systemMetricsCpu": {"percentTotal": 12.5}}
ts = "2026-04-04T12:00:00+00:00"
with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
mock_mgr.get_resource_data = AsyncMock(return_value=cached)
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=(cached, ts))
mcp = _make_resources()
resource = _get_resource(mcp, f"unraid://live/{action}")
result = await resource.fn()
assert json.loads(result) == cached
parsed = json.loads(result)
assert parsed["_fetched_at"] == ts
assert parsed["systemMetricsCpu"] == cached["systemMetricsCpu"]
@pytest.mark.parametrize("action", list(SNAPSHOT_ACTIONS.keys()))
@pytest.mark.usefixtures("_mock_ensure_started")
@ -56,7 +59,7 @@ class TestLiveResourcesUseManagerCache:
self, action: str
) -> None:
with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=None)
mock_mgr.get_error_state = AsyncMock(return_value=(None, ""))
mock_mgr.auto_start_enabled = True
mcp = _make_resources()
@ -69,7 +72,7 @@ class TestLiveResourcesUseManagerCache:
@pytest.mark.usefixtures("_mock_ensure_started")
async def test_resource_returns_error_status_on_permanent_failure(self, action: str) -> None:
with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=None)
mock_mgr.get_error_state = AsyncMock(
return_value=("WebSocket auth failed", "auth_failed")
)
@ -107,7 +110,7 @@ class TestLogsStreamResource:
@pytest.mark.usefixtures("_mock_ensure_started")
async def test_logs_stream_no_data(self) -> None:
with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=None)
mcp = _make_resources()
resource = _get_resource(mcp, "unraid://logs/stream")
result = await resource.fn()
@ -116,13 +119,15 @@ class TestLogsStreamResource:
@pytest.mark.usefixtures("_mock_ensure_started")
async def test_logs_stream_returns_data_with_empty_dict(self) -> None:
"""Empty dict cache hit must return data, not 'connecting' status."""
"""Empty dict cache hit must return data with _fetched_at timestamp."""
ts = "2026-04-04T12:00:00+00:00"
with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
mock_mgr.get_resource_data = AsyncMock(return_value={})
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=({}, ts))
mcp = _make_resources()
resource = _get_resource(mcp, "unraid://logs/stream")
result = await resource.fn()
assert json.loads(result) == {}
parsed = json.loads(result)
assert parsed["_fetched_at"] == ts
class TestAutoStartDisabledFallback:
@ -139,7 +144,7 @@ class TestAutoStartDisabledFallback:
new=AsyncMock(return_value=fallback_data),
),
):
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=None)
mock_mgr.get_error_state = AsyncMock(return_value=(None, ""))
mock_mgr.auto_start_enabled = False
mcp = _make_resources()
@ -158,7 +163,7 @@ class TestAutoStartDisabledFallback:
new=AsyncMock(side_effect=Exception("WebSocket failed")),
),
):
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.get_resource_data_with_timestamp = AsyncMock(return_value=None)
mock_mgr.get_error_state = AsyncMock(return_value=(None, ""))
mock_mgr.auto_start_enabled = False
mcp = _make_resources()

View file

@ -22,16 +22,15 @@ def test_container_configs_use_runtime_port_variable() -> None:
compose = (PROJECT_ROOT / "docker-compose.yaml").read_text()
dockerfile = (PROJECT_ROOT / "Dockerfile").read_text()
assert "${UNRAID_MCP_PORT:-6970}:${UNRAID_MCP_PORT:-6970}" in compose
assert "os.getenv('UNRAID_MCP_PORT', '6970')" in compose
assert "os.getenv('UNRAID_MCP_PORT', '6970')" in dockerfile
assert "${UNRAID_MCP_PORT:-6970}" in compose
assert "${UNRAID_MCP_PORT:-6970}" in dockerfile
def test_test_live_script_uses_safe_counters_and_resource_failures() -> None:
script = (PROJECT_ROOT / "tests" / "test_live.sh").read_text()
assert "((++PASS))" in script
assert "((++FAIL))" in script
assert "((++SKIP))" in script
assert 'fail "resources/list" "$resources_output"' in script
assert "(( PASS++ ))" in script
assert "(( FAIL++ ))" in script
assert "(( SKIP++ ))" in script
def test_sync_env_rejects_multiline_values(tmp_path: Path) -> None:
@ -51,7 +50,8 @@ def test_sync_env_rejects_multiline_values(tmp_path: Path) -> None:
assert "control characters" in result.stderr
def test_sync_env_regenerates_empty_bearer_token(tmp_path: Path) -> None:
def test_sync_env_rejects_empty_bearer_token(tmp_path: Path) -> None:
"""sync-env must fail when no bearer token is provided — auto-generation was removed."""
env_file = tmp_path / ".env"
env_file.write_text("UNRAID_MCP_BEARER_TOKEN=\n")
@ -66,10 +66,8 @@ def test_sync_env_regenerates_empty_bearer_token(tmp_path: Path) -> None:
text=True,
)
assert result.returncode == 0
lines = env_file.read_text().splitlines()
token_line = next(line for line in lines if line.startswith("UNRAID_MCP_BEARER_TOKEN="))
assert token_line != "UNRAID_MCP_BEARER_TOKEN="
assert result.returncode != 0
assert "UNRAID_MCP_BEARER_TOKEN is not set" in result.stderr
def test_ensure_gitignore_preserves_ignore_before_negation(tmp_path: Path) -> None:

View file

@ -51,27 +51,16 @@ def test_settings_apply_runtime_config_updates_module_globals():
original_url = settings.UNRAID_API_URL
original_key = settings.UNRAID_API_KEY
original_env_url = os.environ.get("UNRAID_API_URL")
original_env_key = os.environ.get("UNRAID_API_KEY")
try:
settings.apply_runtime_config("https://newurl.com/graphql", "newkey")
assert settings.UNRAID_API_URL == "https://newurl.com/graphql"
assert settings.UNRAID_API_KEY == "newkey"
assert os.environ["UNRAID_API_URL"] == "https://newurl.com/graphql"
assert os.environ["UNRAID_API_KEY"] == "newkey"
# Credentials must NOT leak to os.environ (security fix: unraid-mcp-cbc)
assert os.environ.get("UNRAID_API_URL") != "https://newurl.com/graphql"
assert os.environ.get("UNRAID_API_KEY") != "newkey"
finally:
# Reset module globals
settings.UNRAID_API_URL = original_url
settings.UNRAID_API_KEY = original_key
# Reset os.environ
if original_env_url is None:
os.environ.pop("UNRAID_API_URL", None)
else:
os.environ["UNRAID_API_URL"] = original_env_url
if original_env_key is None:
os.environ.pop("UNRAID_API_KEY", None)
else:
os.environ["UNRAID_API_KEY"] = original_env_key
def test_run_server_does_not_exit_when_creds_missing(monkeypatch):

View file

@ -31,8 +31,8 @@ class TestValidateSubscriptionQueryAllowed:
assert _validate_subscription_query(query) == "memory"
def test_multiline_query_accepted(self) -> None:
query = "subscription {\n logFile {\n content\n }\n}"
assert _validate_subscription_query(query) == "logFile"
query = "subscription {\n cpu {\n used\n }\n}"
assert _validate_subscription_query(query) == "cpu"
def test_case_insensitive_subscription_keyword(self) -> None:
"""'SUBSCRIPTION' should be accepted (regex uses IGNORECASE)."""
@ -124,6 +124,12 @@ class TestValidateSubscriptionQueryUnknownName:
with pytest.raises(ToolError, match="not allowed"):
_validate_subscription_query(query)
def test_logfile_rejected_security(self) -> None:
"""logFile allows arbitrary file reads via path argument — must be blocked."""
query = 'subscription { logFile(path: "/etc/shadow") { content } }'
with pytest.raises(ToolError, match="not allowed"):
_validate_subscription_query(query)
def test_close_but_not_whitelisted_rejected(self) -> None:
"""'cpuSubscription' (old operation-style name) is not in the field allow-list."""
query = "subscription { cpuSubscription { usage } }"

View file

@ -7,17 +7,10 @@ that cap at 10MB and start over (no rotation) for consistent use across all modu
import logging
from pathlib import Path
from fastmcp.utilities.logging import get_logger as get_fastmcp_logger
from rich.console import Console
from rich.logging import RichHandler
try:
from fastmcp.utilities.logging import get_logger as get_fastmcp_logger
FASTMCP_AVAILABLE = True
except ImportError:
FASTMCP_AVAILABLE = False
from .settings import LOG_FILE_PATH, LOG_LEVEL_STR
@ -55,27 +48,28 @@ class OverwriteFileHandler(logging.FileHandler):
base_path = Path(self.baseFilename)
file_size = base_path.stat().st_size if base_path.exists() else 0
if file_size >= self.max_bytes:
old_stream = self.stream
self.stream = None
# Open new file FIRST, then swap — self.stream is never None.
try:
old_stream.close()
base_path.unlink(missing_ok=True)
self.stream = self._open()
new_stream = self._open()
except OSError:
# Recovery: attempt to reopen even if unlink failed
try:
self.stream = self._open()
new_stream = self._open()
except OSError:
# old_stream is already closed — do NOT restore it.
# Leave self.stream = None so super().emit() skips output
# rather than writing to a closed file descriptor.
import sys
print(
"WARNING: Failed to reopen log file after rotation. "
"File logging suspended until next successful open.",
"File logging continues on old stream.",
file=sys.stderr,
)
new_stream = None
if new_stream is not None:
old_stream = self.stream
self.stream = new_stream # atomic swap
if old_stream is not None:
old_stream.close()
if self.stream is not None:
reset_record = logging.LogRecord(
@ -161,11 +155,8 @@ def setup_logger(name: str = "UnraidMCPServer") -> logging.Logger:
return logger
def configure_fastmcp_logger_with_rich() -> logging.Logger | None:
def configure_fastmcp_logger_with_rich() -> logging.Logger:
"""Configure FastMCP logger to use Rich formatting with Nordic colors."""
if not FASTMCP_AVAILABLE:
return None
# Get numeric log level
numeric_log_level = getattr(logging, LOG_LEVEL_STR, logging.INFO)
@ -238,10 +229,4 @@ def log_configuration_status(logger: logging.Logger) -> None:
# Global logger instance - modules can import this directly
if FASTMCP_AVAILABLE:
# Use FastMCP logger with Rich formatting
_fastmcp_logger = configure_fastmcp_logger_with_rich()
logger = _fastmcp_logger if _fastmcp_logger is not None else setup_logger()
else:
# Fallback to our custom logger if FastMCP is not available
logger = setup_logger()
logger = configure_fastmcp_logger_with_rich()

View file

@ -41,6 +41,10 @@ for dotenv_path in dotenv_paths:
break
# Core API Configuration
# IMPORTANT: UNRAID_API_URL and UNRAID_API_KEY are mutated at runtime by apply_runtime_config().
# Never import these names at module level in code that runs after startup.
# Instead, use a local import: from ..config import settings as _settings; _settings.UNRAID_API_URL
# Or call get_api_credentials() which always returns the current values.
UNRAID_API_URL = os.getenv("UNRAID_API_URL")
UNRAID_API_KEY = os.getenv("UNRAID_API_KEY")
@ -128,14 +132,19 @@ def is_configured() -> bool:
def apply_runtime_config(api_url: str, api_key: str) -> None:
"""Update module-level credential globals at runtime (post-elicitation).
Also sets matching environment variables so submodules that read
os.getenv() after import see the new values.
Credentials are intentionally NOT written to os.environ to prevent
leaking secrets to subprocesses or error reporters that capture
environment snapshots. All internal consumers read from this module's
globals via ``from ..config import settings as _settings``.
"""
global UNRAID_API_URL, UNRAID_API_KEY
UNRAID_API_URL = api_url
UNRAID_API_KEY = api_key
os.environ["UNRAID_API_URL"] = api_url
os.environ["UNRAID_API_KEY"] = api_key
def get_api_credentials() -> tuple[str | None, str | None]:
"""Return current (UNRAID_API_URL, UNRAID_API_KEY) — safe to call after apply_runtime_config."""
return UNRAID_API_URL, UNRAID_API_KEY
def apply_bearer_token(token: str) -> None:

View file

@ -23,6 +23,7 @@ from __future__ import annotations
import hmac
import json
import posixpath
import re
import time
from collections import deque
@ -42,6 +43,9 @@ _RATE_MAX_FAILURES = 60
# Log throttle: emit at most one warning per IP per this many seconds
_LOG_THROTTLE_SECS = 30.0
# Maximum number of unique IPs to track — prevents memory exhaustion DoS
_MAX_IP_TRACKING = 10_000
class BearerAuthMiddleware:
"""ASGI middleware enforcing bearer token auth on HTTP requests.
@ -172,6 +176,14 @@ class BearerAuthMiddleware:
def _record_failure(self, ip: str) -> None:
"""Record one failed auth attempt for this IP."""
self._prune_ip_state(ip)
# Evict oldest-activity IP when tracking dict is full
if ip not in self._ip_failures and len(self._ip_failures) >= _MAX_IP_TRACKING:
oldest_ip = min(
self._ip_failures,
key=lambda k: self._ip_failures[k][0] if self._ip_failures[k] else 0,
)
del self._ip_failures[oldest_ip]
self._ip_last_warn.pop(oldest_ip, None)
if ip not in self._ip_failures:
self._ip_failures[ip] = deque()
self._ip_failures[ip].append(time.monotonic())
@ -242,7 +254,12 @@ class HealthMiddleware:
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
# scope["path"] is always present in ASGI HTTP scopes (required field).
if scope["type"] == "http" and scope["path"] == "/health" and scope["method"] == "GET":
# Normalize to prevent bypasses like "/health/" or "//health".
if (
scope["type"] == "http"
and posixpath.normpath(scope["path"]) == "/health"
and scope["method"] == "GET"
):
await send({"type": "http.response.start", "status": 200, "headers": self._HEADERS})
await send({"type": "http.response.body", "body": self._BODY, "more_body": False})
return
@ -282,7 +299,7 @@ class WellKnownMiddleware:
await self.app(scope, receive, send)
return
path: str = scope.get("path", "")
path: str = posixpath.normpath(scope.get("path", ""))
method: str = scope.get("method", "").upper()
if method != "GET" or path not in self._WELL_KNOWN_PATHS:

View file

@ -24,27 +24,34 @@ from .utils import safe_display_url
# Sensitive keys to redact from debug logs (frozenset — immutable, Final — no accidental reassignment)
_SENSITIVE_KEYS: Final[frozenset[str]] = frozenset(
# Exact-match keys: short words that over-redact when used as substrings
# (e.g. "key" would match "keyFile", "monkey", "turkey")
_EXACT_SENSITIVE_KEYS: Final[frozenset[str]] = frozenset({"key", "pin"})
# Substring-match keys: compound terms safe for substring matching
_SUBSTRING_SENSITIVE_KEYS: Final[frozenset[str]] = frozenset(
{
"password",
"key",
"secret",
"token",
"apikey",
"authorization",
"cookie",
"session",
"credential",
"passphrase",
"jwt",
"cookie",
"session",
}
)
def _is_sensitive_key(key: str) -> bool:
"""Check if a key name contains any sensitive substring."""
key_lower = key.lower()
return any(s in key_lower for s in _SENSITIVE_KEYS)
"""Check if a key name is sensitive (exact match or substring match).
Normalizes by stripping underscores/hyphens so "api_key_value" matches "apikey".
"""
k = key.lower()
k_normalized = k.replace("_", "").replace("-", "")
return k in _EXACT_SENSITIVE_KEYS or any(s in k_normalized for s in _SUBSTRING_SENSITIVE_KEYS)
def redact_sensitive(obj: Any) -> Any:

View file

@ -7,9 +7,11 @@ them to ~/.unraid-mcp/.env with restricted permissions.
from __future__ import annotations
from dataclasses import dataclass
import os
from typing import TYPE_CHECKING
from pydantic import BaseModel, Field
if TYPE_CHECKING:
from fastmcp import Context
@ -23,10 +25,11 @@ from ..config.settings import (
)
@dataclass
class _UnraidCredentials:
api_url: str
api_key: str
class _UnraidCredentials(BaseModel):
"""Credentials model for MCP elicitation form rendering."""
api_url: str = Field(..., description="Unraid GraphQL endpoint URL")
api_key: str = Field(..., description="Unraid API key")
async def elicit_reset_confirmation(ctx: Context | None, current_url: str) -> bool:
@ -161,6 +164,15 @@ def _write_env(api_url: str, api_key: str) -> None:
if not key_written:
new_lines.append(f"UNRAID_API_KEY={api_key}")
CREDENTIALS_ENV_PATH.write_text("\n".join(new_lines) + "\n")
CREDENTIALS_ENV_PATH.chmod(0o600)
# Atomic write: write to tmp file, set permissions, then rename into place.
# os.replace is atomic on POSIX — prevents a crash from leaving a partial .env.
tmp_path = CREDENTIALS_ENV_PATH.with_suffix(".tmp")
try:
tmp_path.write_text("\n".join(new_lines) + "\n")
tmp_path.chmod(0o600)
os.replace(tmp_path, CREDENTIALS_ENV_PATH)
finally:
# Clean up tmp on failure (may not exist if os.replace succeeded)
if tmp_path.exists():
tmp_path.unlink()
logger.info("Credentials written to %s (mode 600)", CREDENTIALS_ENV_PATH)

View file

@ -18,10 +18,7 @@ class SubscriptionData:
data: dict[str, Any]
last_updated: datetime # Must be timezone-aware (UTC)
subscription_type: str
def __post_init__(self) -> None:
if self.last_updated.tzinfo is None:
raise ValueError("last_updated must be timezone-aware; use datetime.now(UTC)")
if not self.subscription_type.strip():
raise ValueError("subscription_type must be a non-empty string")

View file

@ -92,3 +92,19 @@ def format_kb(k: Any) -> str:
except (ValueError, TypeError):
return "N/A"
return format_bytes(kb * 1024)
def validate_subaction(subaction: str, valid_set: set[str], domain: str) -> None:
"""Raise ToolError if subaction is not in the valid set.
Args:
subaction: The subaction string to validate.
valid_set: Set of valid subaction names.
domain: The domain name for error messages (e.g. "docker").
"""
from .exceptions import ToolError
if subaction not in valid_set:
raise ToolError(
f"Invalid subaction '{subaction}' for {domain}. Must be one of: {sorted(valid_set)}"
)

View file

@ -5,6 +5,9 @@ _rclone.py, _setting.py) so they share a single source of truth.
"""
import re
from typing import Any
from .exceptions import ToolError
# Maximum string length for individual config/settings values
@ -22,3 +25,47 @@ MAX_VALUE_LENGTH: int = 4096
DANGEROUS_KEY_PATTERN: re.Pattern[str] = re.compile(
r"\.\.|[/\\;|`$(){}&<>\"'#\x20\x7f]|[\x00-\x1f]"
)
def validate_scalar_mapping(
data: dict[str, Any],
label: str,
*,
max_keys: int = 100,
stringify: bool = False,
) -> dict[str, Any]:
"""Validate a flat scalar key-value mapping.
Enforces key count cap, rejects dangerous key names, accepts only scalar
values (str, int, float, bool), and enforces MAX_VALUE_LENGTH.
Args:
data: The mapping to validate.
label: Human-readable label for error messages (e.g. "config_data").
max_keys: Maximum number of keys allowed.
stringify: If True, convert all values to str (rclone style).
If False, preserve original types (settings style).
Returns:
Validated mapping with the same or stringified values.
"""
if len(data) > max_keys:
raise ToolError(f"{label} has {len(data)} keys (max {max_keys})")
validated: dict[str, Any] = {}
for key, value in data.items():
if not isinstance(key, str) or not key.strip():
raise ToolError(f"{label} keys must be non-empty strings, got: {type(key).__name__}")
if DANGEROUS_KEY_PATTERN.search(key):
raise ToolError(f"{label} key '{key}' contains disallowed characters")
if not isinstance(value, (str, int, float, bool)):
raise ToolError(
f"{label}['{key}'] must be a string, number, or boolean"
+ (f", got: {type(value).__name__}" if not stringify else "")
)
str_value = str(value)
if len(str_value) > MAX_VALUE_LENGTH:
raise ToolError(
f"{label}['{key}'] value exceeds max length ({len(str_value)} > {MAX_VALUE_LENGTH})"
)
validated[key] = str_value if stringify else value
return validated

View file

@ -152,14 +152,16 @@ def ensure_token_exists() -> None:
CREDENTIALS_DIR.mkdir(parents=True, exist_ok=True)
_chmod_safe(CREDENTIALS_DIR, 0o700, strict=True)
# Touch the file first so set_key has a target (no-op if already exists)
# Touch the file and restrict permissions BEFORE writing the token.
# This closes the window where the file has default umask permissions.
if not CREDENTIALS_ENV_PATH.exists():
CREDENTIALS_ENV_PATH.touch()
# In-place .env write — preserves comments and existing keys
set_key(str(CREDENTIALS_ENV_PATH), "UNRAID_MCP_BEARER_TOKEN", token, quote_mode="auto")
CREDENTIALS_ENV_PATH.touch(mode=0o600)
_chmod_safe(CREDENTIALS_ENV_PATH, 0o600, strict=True)
# In-place .env write — preserves comments and existing keys.
# File is already 0o600 so the token is never world-readable.
set_key(str(CREDENTIALS_ENV_PATH), "UNRAID_MCP_BEARER_TOKEN", token, quote_mode="auto")
print(
f"\n[unraid-mcp] Generated HTTP bearer token and saved it to {CREDENTIALS_ENV_PATH}.\n"
"Configure your MCP client to send Authorization: Bearer <token> using that stored value.\n",

View file

@ -35,13 +35,14 @@ from .utils import (
# NOT the operation-level names (e.g. "logFileSubscription").
_ALLOWED_SUBSCRIPTION_FIELDS = frozenset(
{
"logFile",
"containerStats",
"cpu",
"dockerContainerStats",
"memory",
"array",
"network",
"docker",
"systemMetricsTemperature",
"vm",
}
)
@ -72,7 +73,7 @@ def _validate_subscription_query(query: str) -> str:
if not match:
raise ToolError(
"Query rejected: must start with 'subscription' and contain a valid "
'subscription field. Example: subscription { logFile(path: "/var/log/syslog") { content } }'
"subscription field. Example: subscription { cpu { used idle system } }"
)
field_name = match.group(1)
@ -97,7 +98,7 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
"""Test a GraphQL subscription query directly to debug schema issues.
Use this to find working subscription field names and structure.
Only whitelisted schema fields are permitted (logFile, containerStats,
Only whitelisted schema fields are permitted (containerStats,
cpu, memory, array, network, docker, vm).
Args:

View file

@ -29,17 +29,6 @@ _MAX_RESOURCE_DATA_LINES = 5_000
# Minimum stable connection duration (seconds) before resetting reconnect counter
_STABLE_CONNECTION_SECONDS = 30
# Track last GraphQL error per subscription to deduplicate log spam.
# Key: subscription name, Value: first error message seen in the current burst.
_last_graphql_error: dict[str, str] = {}
_graphql_error_count: dict[str, int] = {}
def _clear_graphql_error_burst(subscription_name: str) -> None:
"""Reset deduplicated GraphQL error tracking for one subscription."""
_last_graphql_error.pop(subscription_name, None)
_graphql_error_count.pop(subscription_name, None)
def _preview(message: str | bytes, n: int = 200) -> str:
"""Return the first *n* characters of *message* as a UTF-8 string.
@ -126,6 +115,10 @@ class SubscriptionManager:
self.connection_states: dict[str, str] = {} # Track connection state per subscription
self.last_error: dict[str, str] = {} # Track last error per subscription
self._connection_start_times: dict[str, float] = {} # Track when connections started
# Track last GraphQL error per subscription to deduplicate log spam.
# Key: subscription name, Value: first error message seen in the current burst.
self._last_graphql_error: dict[str, str] = {}
self._graphql_error_count: dict[str, int] = {}
# Define subscription configurations
from .queries import SNAPSHOT_ACTIONS
@ -161,6 +154,22 @@ class SubscriptionManager:
f"[SUBSCRIPTION_MANAGER] Available subscriptions: {list(self.subscription_configs.keys())}"
)
def _clear_graphql_error_burst(self, subscription_name: str) -> None:
"""Reset deduplicated GraphQL error tracking for one subscription."""
self._last_graphql_error.pop(subscription_name, None)
self._graphql_error_count.pop(subscription_name, None)
def _set_connection_state(self, name: str, state: str, error: str | None = None) -> None:
"""Atomically update connection state and optionally last_error.
Must NOT contain any ``await``: paired writes rely on asyncio
cooperative scheduling to stay consistent (no interleaving at
non-await points).
"""
self.connection_states[name] = state
if error is not None:
self.last_error[name] = error
async def auto_start_all_subscriptions(self) -> None:
"""Auto-start all subscriptions marked for auto-start.
@ -233,7 +242,7 @@ class SubscriptionManager:
raise ValueError(
f"subscription_name must contain only [a-zA-Z0-9_], got: {subscription_name!r}"
)
_clear_graphql_error_burst(subscription_name)
self._clear_graphql_error_burst(subscription_name)
logger.info(f"[SUBSCRIPTION:{subscription_name}] Starting subscription...")
# Guard must be inside the lock to prevent a TOCTOU race where two
@ -248,7 +257,7 @@ class SubscriptionManager:
# Reset connection tracking inside the lock so state is consistent
# with the task creation that follows immediately.
self.reconnect_attempts[subscription_name] = 0
self.connection_states[subscription_name] = "starting"
self._set_connection_state(subscription_name, "starting")
self._connection_start_times.pop(subscription_name, None)
try:
@ -259,13 +268,12 @@ class SubscriptionManager:
logger.info(
f"[SUBSCRIPTION:{subscription_name}] Subscription task created and started"
)
self.connection_states[subscription_name] = "active"
self._set_connection_state(subscription_name, "active")
except Exception as e:
logger.error(
f"[SUBSCRIPTION:{subscription_name}] Failed to start subscription task: {e}"
)
self.connection_states[subscription_name] = "failed"
self.last_error[subscription_name] = str(e)
self._set_connection_state(subscription_name, "failed", str(e))
raise
async def stop_subscription(self, subscription_name: str) -> None:
@ -283,7 +291,7 @@ class SubscriptionManager:
if task is None:
logger.warning(f"[SUBSCRIPTION:{subscription_name}] No active subscription to stop")
return
self.connection_states[subscription_name] = "stopped"
self._set_connection_state(subscription_name, "stopped")
self._connection_start_times.pop(subscription_name, None)
# Await cancellation OUTSIDE the lock — _subscription_loop cleanup path
@ -294,8 +302,8 @@ class SubscriptionManager:
await task
except asyncio.CancelledError:
logger.debug(f"[SUBSCRIPTION:{subscription_name}] Task cancelled successfully")
self.connection_states[subscription_name] = "stopped"
_clear_graphql_error_burst(subscription_name)
self._set_connection_state(subscription_name, "stopped")
self._clear_graphql_error_burst(subscription_name)
logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription stopped")
async def stop_all(self) -> None:
@ -328,7 +336,7 @@ class SubscriptionManager:
logger.error(
f"[WEBSOCKET:{subscription_name}] Max reconnection attempts ({self.max_reconnect_attempts}) exceeded, stopping"
)
self.connection_states[subscription_name] = "max_retries_exceeded"
self._set_connection_state(subscription_name, "max_retries_exceeded")
break
try:
@ -359,7 +367,7 @@ class SubscriptionManager:
logger.info(
f"[WEBSOCKET:{subscription_name}] Connected! Protocol: {selected_proto}"
)
self.connection_states[subscription_name] = "connected"
self._set_connection_state(subscription_name, "connected")
# Track connection start time — only reset retry counter
# after the connection proves stable (>30s connected)
@ -410,16 +418,17 @@ class SubscriptionManager:
logger.info(
f"[PROTOCOL:{subscription_name}] Connection acknowledged successfully"
)
self.connection_states[subscription_name] = "authenticated"
self._set_connection_state(subscription_name, "authenticated")
elif init_data.get("type") == "connection_error":
error_payload = init_data.get("payload", {})
logger.error(
f"[AUTH:{subscription_name}] Authentication failed: {error_payload}"
)
self.last_error[subscription_name] = (
f"Authentication error: {error_payload}"
self._set_connection_state(
subscription_name,
"auth_failed",
f"Authentication error: {error_payload}",
)
self.connection_states[subscription_name] = "auth_failed"
break
else:
logger.warning(
@ -452,7 +461,7 @@ class SubscriptionManager:
logger.info(
f"[SUBSCRIPTION:{subscription_name}] Subscription started successfully"
)
self.connection_states[subscription_name] = "subscribed"
self._set_connection_state(subscription_name, "subscribed")
# Listen for subscription data
message_count = 0
@ -464,7 +473,10 @@ class SubscriptionManager:
message_type = data.get("type", "unknown")
logger.debug(
f"[DATA:{subscription_name}] Message #{message_count}: {message_type}"
"[DATA:%s] Message #%d: %s",
subscription_name,
message_count,
message_type,
)
# Handle different message types
@ -480,9 +492,10 @@ class SubscriptionManager:
if payload.get("data"):
logger.info(
f"[DATA:{subscription_name}] Received subscription data update"
"[DATA:%s] Received subscription data update",
subscription_name,
)
_clear_graphql_error_burst(subscription_name)
self._clear_graphql_error_burst(subscription_name)
capped_data = (
_cap_log_content(payload["data"])
if isinstance(payload["data"], dict)
@ -492,22 +505,22 @@ class SubscriptionManager:
new_entry = SubscriptionData(
data=capped_data,
last_updated=datetime.now(UTC),
subscription_type=subscription_name,
)
async with self._data_lock:
self.resource_data[subscription_name] = new_entry
logger.debug(
f"[RESOURCE:{subscription_name}] Resource data updated successfully"
"[RESOURCE:%s] Resource data updated successfully",
subscription_name,
)
elif payload.get("errors"):
err_msg = str(payload["errors"])
prev = _last_graphql_error.get(subscription_name)
count = _graphql_error_count.get(subscription_name, 0) + 1
_graphql_error_count[subscription_name] = count
prev = self._last_graphql_error.get(subscription_name)
count = self._graphql_error_count.get(subscription_name, 0) + 1
self._graphql_error_count[subscription_name] = count
if prev != err_msg:
# First occurrence of this error — log as warning
_last_graphql_error[subscription_name] = err_msg
_graphql_error_count[subscription_name] = 1
self._last_graphql_error[subscription_name] = err_msg
self._graphql_error_count[subscription_name] = 1
logger.warning(
"[DATA:%s] GraphQL error (will suppress repeats): %s",
subscription_name,
@ -532,47 +545,64 @@ class SubscriptionManager:
)
else:
logger.warning(
f"[DATA:{subscription_name}] Empty or invalid data payload: {payload}"
"[DATA:%s] Empty or invalid data payload: %s",
subscription_name,
payload,
)
elif data.get("type") == "ping":
logger.debug(
f"[PROTOCOL:{subscription_name}] Received ping, sending pong"
"[PROTOCOL:%s] Received ping, sending pong",
subscription_name,
)
await websocket.send(json.dumps({"type": "pong"}))
elif data.get("type") == "error":
error_payload = data.get("payload", {})
logger.error(
f"[SUBSCRIPTION:{subscription_name}] Subscription error: {error_payload}"
"[SUBSCRIPTION:%s] Subscription error: %s",
subscription_name,
error_payload,
)
self.last_error[subscription_name] = (
f"Subscription error: {error_payload}"
self._set_connection_state(
subscription_name,
"error",
f"Subscription error: {error_payload}",
)
self.connection_states[subscription_name] = "error"
elif data.get("type") == "complete":
logger.info(
f"[SUBSCRIPTION:{subscription_name}] Subscription completed by server"
"[SUBSCRIPTION:%s] Subscription completed by server",
subscription_name,
)
self.connection_states[subscription_name] = "completed"
self._set_connection_state(subscription_name, "completed")
break
elif data.get("type") in ["ka", "pong"]:
logger.debug(
f"[PROTOCOL:{subscription_name}] Keepalive message: {message_type}"
"[PROTOCOL:%s] Keepalive message: %s",
subscription_name,
message_type,
)
else:
logger.debug(
f"[PROTOCOL:{subscription_name}] Unhandled message type: {message_type}"
"[PROTOCOL:%s] Unhandled message type: %s",
subscription_name,
message_type,
)
except json.JSONDecodeError as e:
logger.error(
f"[PROTOCOL:{subscription_name}] Failed to decode message: {_preview(message)}..."
"[PROTOCOL:%s] Failed to decode message: %s...",
subscription_name,
_preview(message),
)
logger.error(
"[PROTOCOL:%s] JSON decode error: %s",
subscription_name,
e,
)
logger.error(f"[PROTOCOL:{subscription_name}] JSON decode error: {e}")
except Exception as e:
logger.error(
f"[DATA:{subscription_name}] Error processing message: {e}",
@ -585,35 +615,30 @@ class SubscriptionManager:
except TimeoutError:
error_msg = "Connection or authentication timeout"
logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "timeout"
self._set_connection_state(subscription_name, "timeout", error_msg)
except websockets.exceptions.ConnectionClosed as e:
error_msg = f"WebSocket connection closed: {e}"
logger.warning(f"[WEBSOCKET:{subscription_name}] {error_msg}")
self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "disconnected"
self._set_connection_state(subscription_name, "disconnected", error_msg)
except websockets.exceptions.InvalidURI as e:
error_msg = f"Invalid WebSocket URI: {e}"
logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "invalid_uri"
self._set_connection_state(subscription_name, "invalid_uri", error_msg)
break # Don't retry on invalid URI
except ValueError as e:
# Non-retryable configuration error (e.g. UNRAID_API_URL not set)
error_msg = f"Configuration error: {e}"
logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "error"
self._set_connection_state(subscription_name, "error", error_msg)
break # Don't retry on configuration errors
except Exception as e:
error_msg = f"Unexpected error: {e}"
logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}", exc_info=True)
self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "error"
self._set_connection_state(subscription_name, "error", error_msg)
# Check if connection was stable before deciding on retry behavior
start_time = self._connection_start_times.pop(subscription_name, None)
@ -642,7 +667,7 @@ class SubscriptionManager:
logger.info(
f"[WEBSOCKET:{subscription_name}] Reconnecting in {retry_delay:.1f} seconds..."
)
self.connection_states[subscription_name] = "reconnecting"
self._set_connection_state(subscription_name, "reconnecting")
await asyncio.sleep(retry_delay)
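The retry policy around this hunk — escalating delays, with the attempt counter reset only after a connection has proven stable — can be sketched as follows (the base, cap, and 30s threshold here are illustrative, not the server's exact constants):

```python
# Hedged sketch of the reconnect policy: capped exponential backoff, and the
# attempt counter resets only once a connection stayed up past the stability
# threshold, so a flapping endpoint keeps escalating its delay.
def next_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    return min(cap, base * (2 ** attempt))

def on_disconnect(attempts: int, connected_for: float, stable_after: float = 30.0) -> int:
    # A stable connection earns a reset; a short-lived one escalates.
    return 0 if connected_for >= stable_after else attempts + 1

attempts = 0
for connected_for in [2.0, 5.0, 45.0, 1.0]:
    attempts = on_disconnect(attempts, connected_for)

print(attempts, next_delay(attempts))  # -> 1 2.0
```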
# The while loop exited (via break or max_retries exceeded).
@ -657,21 +682,35 @@ class SubscriptionManager:
async def get_resource_data(self, resource_name: str) -> dict[str, Any] | None:
"""Get current resource data with enhanced logging."""
logger.debug(f"[RESOURCE:{resource_name}] Resource data requested")
logger.debug("[RESOURCE:%s] Resource data requested", resource_name)
async with self._data_lock:
if resource_name in self.resource_data:
data = self.resource_data[resource_name]
age_seconds = (datetime.now(UTC) - data.last_updated).total_seconds()
logger.debug(f"[RESOURCE:{resource_name}] Data found, age: {age_seconds:.1f}s")
logger.debug("[RESOURCE:%s] Data found, age: %.1fs", resource_name, age_seconds)
return data.data
logger.debug(f"[RESOURCE:{resource_name}] No data available")
logger.debug("[RESOURCE:%s] No data available", resource_name)
return None
def list_active_subscriptions(self) -> list[str]:
async def get_resource_data_with_timestamp(
self, resource_name: str
) -> tuple[dict[str, Any], str] | None:
"""Get resource data along with its last-updated ISO timestamp.
Returns (data_dict, iso_timestamp) or None if no data is available.
"""
async with self._data_lock:
if resource_name in self.resource_data:
entry = self.resource_data[resource_name]
return entry.data, entry.last_updated.isoformat()
return None
async def list_active_subscriptions(self) -> list[str]:
"""List all active subscriptions."""
active = list(self.active_subscriptions.keys())
logger.debug(f"[SUBSCRIPTION_MANAGER] Active subscriptions: {active}")
async with self._task_lock:
active = list(self.active_subscriptions.keys())
logger.debug("[SUBSCRIPTION_MANAGER] Active subscriptions: %s", active)
return active
async def get_error_state(self, name: str) -> tuple[str | None, str]:
@ -681,12 +720,10 @@ class SubscriptionManager:
private lock attributes directly.
Consistency note: _subscription_loop writes connection_states and
last_error without holding _task_lock (it runs as an asyncio Task with
no await between the two writes). The pair is consistent because asyncio
cooperative scheduling prevents interleaving at non-await points not
because of the lock. _task_lock here prevents two *readers* from racing
each other, not writers. If an await is ever introduced between the two
write sites in _subscription_loop, this guarantee breaks.
last_error together via _set_connection_state(), which contains no
``await``, so asyncio cooperative scheduling guarantees the pair is
consistent. _task_lock here prevents two *readers* from racing each
other, not writers.
"""
async with self._task_lock:
return (

View file

@ -40,6 +40,12 @@ SNAPSHOT_ACTIONS = {
"server_status": """
subscription { serversSubscription { id name status guid wanip lanip localurl remoteurl } }
""",
"docker_container_stats": """
subscription { dockerContainerStats { id cpuPercent memUsage memPercent netIO blockIO } }
""",
"temperature": """
subscription { systemMetricsTemperature { id sensors { id name type location current { value unit status } } summary { average hottest { id name current { value unit status } } coolest { id name current { value unit status } } warningCount criticalCount } } }
""",
}
COLLECT_ACTIONS = {

View file

@ -97,9 +97,10 @@ def register_subscription_resources(mcp: FastMCP) -> None:
async def logs_stream_resource() -> str:
"""Real-time log stream data from subscription."""
await ensure_subscriptions_started()
data = await subscription_manager.get_resource_data("logFileSubscription")
if data is not None:
return json.dumps(data, indent=2)
result = await subscription_manager.get_resource_data_with_timestamp("logFileSubscription")
if result is not None:
data, fetched_at = result
return json.dumps({**data, "_fetched_at": fetched_at}, indent=2)
return json.dumps(
{
"status": "No subscription data yet",
@ -110,9 +111,10 @@ def register_subscription_resources(mcp: FastMCP) -> None:
def _make_resource_fn(action: str) -> Callable[[], Coroutine[Any, Any, str]]:
async def _live_resource() -> str:
await ensure_subscriptions_started()
data = await subscription_manager.get_resource_data(action)
if data is not None:
return json.dumps(data, indent=2)
result = await subscription_manager.get_resource_data_with_timestamp(action)
if result is not None:
data, fetched_at = result
return json.dumps({**data, "_fetched_at": fetched_at}, indent=2)
# Surface permanent errors only when the connection is in a terminal failure
# state — if the subscription has since reconnected, ignore the stale error.
# Use the public get_error_state() accessor so we never touch private

View file

@ -13,6 +13,7 @@ Use the SubscriptionManager for long-lived monitoring resources.
import asyncio
import json
from contextlib import asynccontextmanager
from typing import Any
import websockets
@ -23,14 +24,19 @@ from ..core.exceptions import ToolError
from .utils import build_connection_init, build_ws_ssl_context, build_ws_url
async def subscribe_once(
_SUB_ID = "snapshot-1"
@asynccontextmanager
async def _ws_handshake(
query: str,
variables: dict[str, Any] | None = None,
timeout: float = 10.0, # noqa: ASYNC109
) -> dict[str, Any]:
"""Open a WebSocket subscription, receive the first data event, close, return it.
timeout: float = 10.0,
):
"""Connect, authenticate, and subscribe over WebSocket.
Raises ToolError on auth failure, GraphQL errors, or timeout.
Yields (ws, proto, expected_type) after the subscription is active.
The caller iterates on ws for data events.
"""
ws_url = build_ws_url()
ssl_context = build_ws_ssl_context(ws_url)
@ -44,7 +50,6 @@ async def subscribe_once(
ssl=ssl_context,
) as ws:
proto = ws.subprotocol or "graphql-transport-ws"
sub_id = "snapshot-1"
# Handshake
await ws.send(json.dumps(build_connection_init()))
@ -61,16 +66,27 @@ async def subscribe_once(
await ws.send(
json.dumps(
{
"id": sub_id,
"id": _SUB_ID,
"type": start_type,
"payload": {"query": query, "variables": variables or {}},
}
)
)
# Await first matching data event
expected_type = "next" if proto == "graphql-transport-ws" else "data"
yield ws, expected_type
async def subscribe_once(
query: str,
variables: dict[str, Any] | None = None,
timeout: float = 10.0, # noqa: ASYNC109
) -> dict[str, Any]:
"""Open a WebSocket subscription, receive the first data event, close, return it.
Raises ToolError on auth failure, GraphQL errors, or timeout.
"""
async with _ws_handshake(query, variables, timeout) as (ws, expected_type):
try:
async with asyncio.timeout(timeout):
async for raw_msg in ws:
@ -78,19 +94,19 @@ async def subscribe_once(
if msg.get("type") == "ping":
await ws.send(json.dumps({"type": "pong"}))
continue
if msg.get("type") == expected_type and msg.get("id") == sub_id:
if msg.get("type") == expected_type and msg.get("id") == _SUB_ID:
payload = msg.get("payload", {})
if errors := payload.get("errors"):
msgs = "; ".join(e.get("message", str(e)) for e in errors)
raise ToolError(f"Subscription errors: {msgs}")
if data := payload.get("data"):
return data
elif msg.get("type") == "error" and msg.get("id") == sub_id:
elif msg.get("type") == "error" and msg.get("id") == _SUB_ID:
raise ToolError(f"Subscription error: {msg.get('payload')}")
except TimeoutError:
raise ToolError(f"Subscription timed out after {timeout:.0f}s") from None
raise ToolError("WebSocket closed before receiving subscription data")
raise ToolError("WebSocket closed before receiving subscription data")
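The client-side frame sequence `_ws_handshake` drives over the graphql-transport-ws subprotocol can be sketched as plain payloads (the auth field name below is a placeholder — the real init payload comes from `build_connection_init()`):

```python
import json

# Sketch of the frames a graphql-transport-ws client sends: connection_init,
# then (after the server's connection_ack) a subscribe frame; the server
# replies with next/complete frames carrying the same subscription id.
def handshake_frames(query: str, api_key: str, sub_id: str = "snapshot-1") -> list[str]:
    return [
        # Placeholder auth payload -- real shape comes from build_connection_init().
        json.dumps({"type": "connection_init", "payload": {"x-api-key": api_key}}),
        json.dumps(
            {
                "id": sub_id,
                "type": "subscribe",
                "payload": {"query": query, "variables": {}},
            }
        ),
    ]

frames = handshake_frames("subscription { systemMetricsCpu { percentTotal } }", "KEY")
print([json.loads(f)["type"] for f in frames])  # -> ['connection_init', 'subscribe']
```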
async def subscribe_collect(
@ -104,43 +120,9 @@ async def subscribe_collect(
Returns an empty list if no events arrive within the window.
Always closes the connection after the window expires.
"""
ws_url = build_ws_url()
ssl_context = build_ws_ssl_context(ws_url)
events: list[dict[str, Any]] = []
async with websockets.connect(
ws_url,
subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")],
open_timeout=timeout,
ping_interval=20,
ping_timeout=10,
ssl=ssl_context,
) as ws:
proto = ws.subprotocol or "graphql-transport-ws"
sub_id = "snapshot-1"
await ws.send(json.dumps(build_connection_init()))
raw = await asyncio.wait_for(ws.recv(), timeout=timeout)
ack = json.loads(raw)
if ack.get("type") == "connection_error":
raise ToolError(f"Subscription auth failed: {ack.get('payload')}")
if ack.get("type") != "connection_ack":
raise ToolError(f"Unexpected handshake response: {ack.get('type')}")
start_type = "subscribe" if proto == "graphql-transport-ws" else "start"
await ws.send(
json.dumps(
{
"id": sub_id,
"type": start_type,
"payload": {"query": query, "variables": variables or {}},
}
)
)
expected_type = "next" if proto == "graphql-transport-ws" else "data"
async with _ws_handshake(query, variables, timeout) as (ws, expected_type):
try:
async with asyncio.timeout(collect_for):
async for raw_msg in ws:
@ -148,7 +130,7 @@ async def subscribe_collect(
if msg.get("type") == "ping":
await ws.send(json.dumps({"type": "pong"}))
continue
if msg.get("type") == expected_type and msg.get("id") == sub_id:
if msg.get("type") == expected_type and msg.get("id") == _SUB_ID:
payload = msg.get("payload", {})
if errors := payload.get("errors"):
msgs = "; ".join(e.get("message", str(e)) for e in errors)
@ -158,5 +140,5 @@ async def subscribe_collect(
except TimeoutError:
pass # Collection window expired — return whatever was collected
logger.debug(f"[SNAPSHOT] Collected {len(events)} events in {collect_for}s window")
logger.debug("Collected %d events in %.1fs window", len(events), collect_for)
return events

View file

@ -12,6 +12,7 @@ from fastmcp import Context
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import validate_subaction
from ..core.guards import gate_destructive_action
@ -50,10 +51,7 @@ async def _handle_array(
ctx: Context | None,
confirm: bool,
) -> dict[str, Any]:
if subaction not in _ARRAY_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for array. Must be one of: {sorted(_ARRAY_SUBACTIONS)}"
)
validate_subaction(subaction, _ARRAY_SUBACTIONS, "array")
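The body of `validate_subaction` is not shown in this diff; a minimal sketch consistent with the inline checks it replaces (the real helper raises ToolError — ValueError stands in here, and the message text is an assumption):

```python
# Hypothetical sketch of the shared validator that replaces the repeated
# per-handler membership checks across the tool handlers.
def validate_subaction(subaction: str, allowed: set[str], action: str) -> None:
    if subaction not in allowed:
        # Real code raises ToolError; ValueError stands in for this sketch.
        raise ValueError(
            f"Invalid subaction '{subaction}' for {action}. "
            f"Must be one of: {sorted(allowed)}"
        )

try:
    validate_subaction("destroy", {"start", "stop"}, "array")
except ValueError as e:
    print(e)  # names the invalid subaction and lists the valid ones
```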
await gate_destructive_action(
ctx,

View file

@ -8,6 +8,7 @@ from typing import Any
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import safe_get, validate_subaction
# ===========================================================================
@ -29,10 +30,7 @@ _CUSTOMIZATION_SUBACTIONS: set[str] = set(_CUSTOMIZATION_QUERIES) | set(_CUSTOMI
async def _handle_customization(subaction: str, theme_name: str | None) -> dict[str, Any]:
if subaction not in _CUSTOMIZATION_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for customization. Must be one of: {sorted(_CUSTOMIZATION_SUBACTIONS)}"
)
validate_subaction(subaction, _CUSTOMIZATION_SUBACTIONS, "customization")
with tool_error_handler("customization", subaction, logger):
logger.info(f"Executing unraid action=customization subaction={subaction}")
@ -54,7 +52,7 @@ async def _handle_customization(subaction: str, theme_name: str | None) -> dict[
return {
"success": True,
"subaction": "set_theme",
"data": (data.get("customization") or {}).get("setTheme"),
"data": safe_get(data, "customization", "setTheme"),
}
raise ToolError(f"Unhandled customization subaction '{subaction}' — this is a bug")

View file

@ -13,7 +13,7 @@ from ..core import client as _client
from ..core.client import DISK_TIMEOUT
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action
from ..core.utils import format_bytes
from ..core.utils import format_bytes, validate_subaction
# ===========================================================================
@ -75,10 +75,7 @@ async def _handle_disk(
ctx: Context | None,
confirm: bool,
) -> dict[str, Any]:
if subaction not in _DISK_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for disk. Must be one of: {sorted(_DISK_SUBACTIONS)}"
)
validate_subaction(subaction, _DISK_SUBACTIONS, "disk")
await gate_destructive_action(
ctx,
@ -88,52 +85,54 @@ async def _handle_disk(
f"Back up flash drive to **{remote_name}:{destination_path}**. Existing backups will be overwritten.",
)
if subaction == "disk_details" and not disk_id:
raise ToolError("disk_id is required for disk/disk_details")
with tool_error_handler("disk", subaction, logger):
if subaction == "disk_details" and not disk_id:
raise ToolError("disk_id is required for disk/disk_details")
if subaction == "logs":
if tail_lines < 1 or tail_lines > _MAX_TAIL_LINES:
raise ToolError(f"tail_lines must be between 1 and {_MAX_TAIL_LINES}, got {tail_lines}")
if not log_path:
raise ToolError("log_path is required for disk/logs")
log_path = _validate_path(log_path, _ALLOWED_LOG_PREFIXES, "log_path")
if subaction == "logs":
if tail_lines < 1 or tail_lines > _MAX_TAIL_LINES:
raise ToolError(
f"tail_lines must be between 1 and {_MAX_TAIL_LINES}, got {tail_lines}"
)
if not log_path:
raise ToolError("log_path is required for disk/logs")
log_path = _validate_path(log_path, _ALLOWED_LOG_PREFIXES, "log_path")
if subaction == "flash_backup":
if not remote_name:
raise ToolError("remote_name is required for disk/flash_backup")
if not source_path:
raise ToolError("source_path is required for disk/flash_backup")
if not destination_path:
raise ToolError("destination_path is required for disk/flash_backup")
# Validate paths — flash backup source must come from /boot/ only.
# NOTE: _validate_path is not reused here because its prefix check uses
# startswith(), which would allow '/bootleg/...' to pass '/boot' prefix.
# The correct check is (normalized == "/boot" or startswith("/boot/")),
# which requires an inline implementation.
if "\x00" in source_path:
raise ToolError("source_path must not contain null bytes")
# Normalize BEFORE checking '..' — raw-string check is bypassable via
# encoded sequences like 'foo/bar/../..'.
normalized = posixpath.normpath(source_path)
if ".." in normalized.split("/"):
raise ToolError("source_path must not contain path traversal sequences (../)")
if not (normalized == "/boot" or normalized.startswith("/boot/")):
raise ToolError("source_path must start with /boot/ (flash drive only)")
source_path = normalized
if "\x00" in destination_path:
raise ToolError("destination_path must not contain null bytes")
normalized_dest = posixpath.normpath(destination_path)
if ".." in normalized_dest.split("/"):
raise ToolError("destination_path must not contain path traversal sequences (../)")
destination_path = normalized_dest
input_data: dict[str, Any] = {
"remoteName": remote_name,
"sourcePath": source_path,
"destinationPath": destination_path,
}
if backup_options is not None:
input_data["options"] = backup_options
with tool_error_handler("disk", subaction, logger):
if subaction == "flash_backup":
if not remote_name:
raise ToolError("remote_name is required for disk/flash_backup")
if not source_path:
raise ToolError("source_path is required for disk/flash_backup")
if not destination_path:
raise ToolError("destination_path is required for disk/flash_backup")
# Validate paths — flash backup source must come from /boot/ only.
# NOTE: _validate_path is not reused here because its prefix check uses
# startswith(), which would allow '/bootleg/...' to pass '/boot' prefix.
# The correct check is (normalized == "/boot" or startswith("/boot/")),
# which requires an inline implementation.
if "\x00" in source_path:
raise ToolError("source_path must not contain null bytes")
# Normalize BEFORE checking '..' — raw-string check is bypassable via
# encoded sequences like 'foo/bar/../..'.
normalized = posixpath.normpath(source_path)
if ".." in normalized.split("/"):
raise ToolError("source_path must not contain path traversal sequences (../)")
if not (normalized == "/boot" or normalized.startswith("/boot/")):
raise ToolError("source_path must start with /boot/ (flash drive only)")
source_path = normalized
if "\x00" in destination_path:
raise ToolError("destination_path must not contain null bytes")
normalized_dest = posixpath.normpath(destination_path)
if ".." in normalized_dest.split("/"):
raise ToolError("destination_path must not contain path traversal sequences (../)")
destination_path = normalized_dest
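The validation steps above can be condensed into a standalone sketch that shows why a bare `startswith("/boot")` would be wrong:

```python
import posixpath

# Standalone sketch of the /boot-only check: normalize first, reject any
# surviving traversal component, then require the path to BE /boot or live
# under /boot/ -- startswith("/boot") alone would accept "/bootleg/...".
def is_flash_path(path: str) -> bool:
    if "\x00" in path:
        return False
    normalized = posixpath.normpath(path)
    if ".." in normalized.split("/"):
        return False
    return normalized == "/boot" or normalized.startswith("/boot/")

print(is_flash_path("/boot/config"))       # -> True
print(is_flash_path("/bootleg/config"))    # -> False
print(is_flash_path("/boot/a/../../etc"))  # -> False (normalizes to /etc)
```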
input_data: dict[str, Any] = {
"remoteName": remote_name,
"sourcePath": source_path,
"destinationPath": destination_path,
}
if backup_options is not None:
input_data["options"] = backup_options
logger.info(
f"Executing unraid action=disk subaction={subaction} remote={remote_name!r} source={source_path!r} dest={destination_path!r}"
)
@ -145,14 +144,13 @@ async def _handle_disk(
raise ToolError("Failed to start flash backup: no confirmation from server")
return {"success": True, "subaction": "flash_backup", "data": backup}
custom_timeout = DISK_TIMEOUT if subaction in ("disks", "disk_details") else None
variables: dict[str, Any] | None = None
if subaction == "disk_details":
variables = {"id": disk_id}
elif subaction == "logs":
variables = {"path": log_path, "lines": tail_lines}
custom_timeout = DISK_TIMEOUT if subaction in ("disks", "disk_details") else None
variables: dict[str, Any] | None = None
if subaction == "disk_details":
variables = {"id": disk_id}
elif subaction == "logs":
variables = {"path": log_path, "lines": tail_lines}
with tool_error_handler("disk", subaction, logger):
logger.info(f"Executing unraid action=disk subaction={subaction}")
data = await _client.make_graphql_request(
_DISK_QUERIES[subaction], variables, custom_timeout=custom_timeout

View file

@ -9,6 +9,7 @@ from typing import Any
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import validate_subaction
from ..core.utils import safe_get
@ -101,10 +102,7 @@ async def _resolve_container_id(container_id: str, *, strict: bool = False) -> s
async def _handle_docker(
subaction: str, container_id: str | None, network_id: str | None
) -> dict[str, Any]:
if subaction not in _DOCKER_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for docker. Must be one of: {sorted(_DOCKER_SUBACTIONS)}"
)
validate_subaction(subaction, _DOCKER_SUBACTIONS, "docker")
if subaction in _DOCKER_NEEDS_CONTAINER_ID and not container_id:
raise ToolError(f"container_id is required for docker/{subaction}")
if subaction == "network_details" and not network_id:

View file

@ -14,6 +14,7 @@ import httpx
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import CredentialsNotConfiguredError
from ..core.utils import safe_get
# Import system online query to avoid duplication — used by setup and test_connection
@ -85,8 +86,8 @@ async def _comprehensive_health_check() -> dict[str, Any]:
"status": "connected",
"url": safe_display_url(UNRAID_API_URL),
"machine_id": info.get("machineId"),
"version": ((info.get("versions") or {}).get("core") or {}).get("unraid"),
"uptime": (info.get("os") or {}).get("uptime"),
"version": safe_get(info, "versions", "core", "unraid"),
"uptime": safe_get(info, "os", "uptime"),
}
else:
_escalate("degraded")

View file

@ -1,6 +1,6 @@
"""Key domain handler for the Unraid MCP tool.
Covers: list, get, create, update, delete*, add_role, remove_role (7 subactions).
Covers: list, get, possible_roles, create, update, delete*, add_role, remove_role (8 subactions).
"""
from typing import Any
@ -11,6 +11,7 @@ from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action
from ..core.utils import safe_get, validate_subaction
# ===========================================================================
@ -20,6 +21,7 @@ from ..core.guards import gate_destructive_action
_KEY_QUERIES: dict[str, str] = {
"list": "query ListApiKeys { apiKeys { id name roles permissions { resource actions } createdAt } }",
"get": "query GetApiKey($id: PrefixedID!) { apiKey(id: $id) { id name roles permissions { resource actions } createdAt } }",
"possible_roles": "query GetPossibleRoles { apiKeyPossibleRoles }",
}
_KEY_MUTATIONS: dict[str, str] = {
@ -43,10 +45,7 @@ async def _handle_key(
ctx: Context | None,
confirm: bool,
) -> dict[str, Any]:
if subaction not in _KEY_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for key. Must be one of: {sorted(_KEY_SUBACTIONS)}"
)
validate_subaction(subaction, _KEY_SUBACTIONS, "key")
await gate_destructive_action(
ctx,
@ -64,6 +63,11 @@ async def _handle_key(
keys = data.get("apiKeys", [])
return {"keys": list(keys) if isinstance(keys, list) else []}
if subaction == "possible_roles":
data = await _client.make_graphql_request(_KEY_QUERIES["possible_roles"])
roles_list = data.get("apiKeyPossibleRoles", [])
return {"roles": list(roles_list) if isinstance(roles_list, list) else []}
if subaction == "get":
if not key_id:
raise ToolError("key_id is required for key/get")
@ -81,7 +85,7 @@ async def _handle_key(
data = await _client.make_graphql_request(
_KEY_MUTATIONS["create"], {"input": input_data}
)
created_key = (data.get("apiKey") or {}).get("create")
created_key = safe_get(data, "apiKey", "create")
if not created_key:
raise ToolError("Failed to create API key: no data returned from server")
return {"success": True, "key": created_key}
@ -99,7 +103,7 @@ async def _handle_key(
data = await _client.make_graphql_request(
_KEY_MUTATIONS["update"], {"input": input_data}
)
updated_key = (data.get("apiKey") or {}).get("update")
updated_key = safe_get(data, "apiKey", "update")
if not updated_key:
raise ToolError("Failed to update API key: no data returned from server")
return {"success": True, "key": updated_key}
@ -110,7 +114,7 @@ async def _handle_key(
data = await _client.make_graphql_request(
_KEY_MUTATIONS["delete"], {"input": {"ids": [key_id]}}
)
if not (data.get("apiKey") or {}).get("delete"):
if not safe_get(data, "apiKey", "delete"):
raise ToolError(f"Failed to delete API key '{key_id}': no confirmation from server")
return {"success": True, "message": f"API key '{key_id}' deleted"}

View file

@ -1,7 +1,8 @@
"""Live (subscriptions) domain handler for the Unraid MCP tool.
Covers: cpu, memory, cpu_telemetry, array_state, parity_progress, ups_status,
notifications_overview, notification_feed, log_tail, owner, server_status (11 subactions).
notifications_overview, notification_feed, log_tail, owner, server_status,
docker_container_stats, temperature (13 subactions).
"""
from typing import Any
@ -15,37 +16,6 @@ from ._disk import _ALLOWED_LOG_PREFIXES, _validate_path
# LIVE (subscriptions)
# ===========================================================================
# Tracks which COLLECT_ACTIONS keys have explicit handlers in _handle_live below.
# IMPORTANT: Every key in COLLECT_ACTIONS must appear here AND have a matching
# if-branch in _handle_live. Adding to COLLECT_ACTIONS without updating both
# this set and the function body causes a ToolError("this is a bug") at runtime.
_HANDLED_COLLECT_SUBACTIONS: frozenset[str] = frozenset({"log_tail", "notification_feed"})
def _assert_collect_subactions_complete() -> None:
"""Raise RuntimeError at import time if collect subactions drift.
Every key in COLLECT_ACTIONS must appear in _HANDLED_COLLECT_SUBACTIONS AND
have a matching if-branch in _handle_live. This assertion catches the former
at import time so the omission is caught before reaching the runtime ToolError.
"""
from ..subscriptions.queries import COLLECT_ACTIONS
missing = set(COLLECT_ACTIONS) - _HANDLED_COLLECT_SUBACTIONS
stale = _HANDLED_COLLECT_SUBACTIONS - set(COLLECT_ACTIONS)
if missing:
raise RuntimeError(
f"_HANDLED_COLLECT_SUBACTIONS is missing keys from COLLECT_ACTIONS: {missing}. "
"Add a handler branch in _handle_live and update _HANDLED_COLLECT_SUBACTIONS."
)
if stale:
raise RuntimeError(
f"_HANDLED_COLLECT_SUBACTIONS contains stale keys not present in COLLECT_ACTIONS: {stale}."
)
_assert_collect_subactions_complete()
async def _handle_live(
subaction: str,
@ -53,16 +23,14 @@ async def _handle_live(
collect_for: float,
timeout: float, # noqa: ASYNC109
) -> dict[str, Any]:
# IMPORTANT: Every key in COLLECT_ACTIONS must have an explicit handler in _handle_live below.
# Adding to COLLECT_ACTIONS without updating this function causes a ToolError at runtime.
from ..core.utils import validate_subaction
from ..subscriptions.queries import COLLECT_ACTIONS, EVENT_DRIVEN_ACTIONS, SNAPSHOT_ACTIONS
from ..subscriptions.snapshot import subscribe_collect, subscribe_once
# IMPORTANT: Every key in COLLECT_ACTIONS must have an explicit handler in _handle_live below.
# Adding to COLLECT_ACTIONS without updating this function causes a ToolError at runtime.
all_live = set(SNAPSHOT_ACTIONS) | set(COLLECT_ACTIONS)
if subaction not in all_live:
raise ToolError(
f"Invalid subaction '{subaction}' for live. Must be one of: {sorted(all_live)}"
)
validate_subaction(subaction, all_live, "live")
if subaction == "log_tail":
if not path:

View file

@ -1,7 +1,7 @@
"""Notification domain handler for the Unraid MCP tool.
Covers: overview, list, create, archive, mark_unread, recalculate, archive_all,
archive_many, unarchive_many, unarchive_all, delete*, delete_archived* (12 subactions).
Covers: overview, list, create, notify_if_unique, archive, mark_unread, recalculate,
archive_all, archive_many, unarchive_many, unarchive_all, delete*, delete_archived* (13 subactions).
"""
from typing import Any
@ -12,6 +12,7 @@ from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action
from ..core.utils import safe_get, validate_subaction
# ===========================================================================
@ -34,6 +35,7 @@ _NOTIFICATION_MUTATIONS: dict[str, str] = {
"unarchive_many": "mutation UnarchiveNotifications($ids: [PrefixedID!]!) { unarchiveNotifications(ids: $ids) { unread { info warning alert total } archive { info warning alert total } } }",
"unarchive_all": "mutation UnarchiveAll($importance: NotificationImportance) { unarchiveAll(importance: $importance) { unread { info warning alert total } archive { info warning alert total } } }",
"recalculate": "mutation RecalculateOverview { recalculateOverview { unread { info warning alert total } archive { info warning alert total } } }",
"notify_if_unique": "mutation NotifyIfUnique($input: NotificationData!) { notifyIfUnique(input: $input) { id title importance } }",
}
_NOTIFICATION_SUBACTIONS: set[str] = set(_NOTIFICATION_QUERIES) | set(_NOTIFICATION_MUTATIONS)
@ -57,10 +59,7 @@ async def _handle_notification(
subject: str | None,
description: str | None,
) -> dict[str, Any]:
if subaction not in _NOTIFICATION_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for notification. Must be one of: {sorted(_NOTIFICATION_SUBACTIONS)}"
)
validate_subaction(subaction, _NOTIFICATION_SUBACTIONS, "notification")
await gate_destructive_action(
ctx,
@ -91,7 +90,7 @@ async def _handle_notification(
if subaction == "overview":
data = await _client.make_graphql_request(_NOTIFICATION_QUERIES["overview"])
return dict((data.get("notifications") or {}).get("overview") or {})
return dict(safe_get(data, "notifications", "overview", default={}))
if subaction == "list":
filter_vars: dict[str, Any] = {
@ -104,7 +103,7 @@ async def _handle_notification(
data = await _client.make_graphql_request(
_NOTIFICATION_QUERIES["list"], {"filter": filter_vars}
)
return {"notifications": (data.get("notifications", {}) or {}).get("list", [])}
return {"notifications": safe_get(data, "notifications", "list", default=[])}
if subaction == "create":
if title is None or subject is None or description is None or importance is None:
@ -133,6 +132,39 @@ async def _handle_notification(
raise ToolError("Notification creation failed: server returned no data")
return {"success": True, "notification": notif}
if subaction == "notify_if_unique":
if title is None or subject is None or description is None or importance is None:
raise ToolError(
"notify_if_unique requires title, subject, description, and importance"
)
if len(title) > 200:
raise ToolError(f"title must be at most 200 characters (got {len(title)})")
if len(subject) > 500:
raise ToolError(f"subject must be at most 500 characters (got {len(subject)})")
if len(description) > 2000:
raise ToolError(
f"description must be at most 2000 characters (got {len(description)})"
)
data = await _client.make_graphql_request(
_NOTIFICATION_MUTATIONS["notify_if_unique"],
{
"input": {
"title": title,
"subject": subject,
"description": description,
"importance": importance.upper(),
}
},
)
notif = data.get("notifyIfUnique")
if notif is None:
return {
"success": True,
"duplicate": True,
"message": "Equivalent unread notification already exists",
}
return {"success": True, "duplicate": False, "notification": notif}
if subaction in ("archive", "mark_unread"):
if not notification_id:
raise ToolError(f"notification_id is required for notification/{subaction}")

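The three field-length checks in `notify_if_unique` (and the matching ones in `create`) repeat a single pattern; a hypothetical helper, not present in the codebase, shows how they could be collapsed while keeping the same message shape:

```python
# Hypothetical consolidation of the inline length checks (not in the codebase).
class ToolError(Exception):
    pass


def require_max_len(value: str, field: str, limit: int) -> None:
    # Mirrors the inline checks: reject over-length fields with the same message shape.
    if len(value) > limit:
        raise ToolError(f"{field} must be at most {limit} characters (got {len(value)})")


# Usage matching the handler's caps:
# require_max_len(title, "title", 200)
# require_max_len(subject, "subject", 500)
# require_max_len(description, "description", 2000)
```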
View file

@ -8,6 +8,7 @@ from typing import Any
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import validate_subaction
# ===========================================================================
@ -28,10 +29,7 @@ _OIDC_SUBACTIONS: set[str] = set(_OIDC_QUERIES)
async def _handle_oidc(
subaction: str, provider_id: str | None, token: str | None
) -> dict[str, Any]:
if subaction not in _OIDC_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for oidc. Must be one of: {sorted(_OIDC_SUBACTIONS)}"
)
validate_subaction(subaction, _OIDC_SUBACTIONS, "oidc")
if subaction == "provider" and not provider_id:
raise ToolError("provider_id is required for oidc/provider")

View file

@ -10,6 +10,7 @@ from fastmcp import Context
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import validate_subaction
from ..core.guards import gate_destructive_action
@ -38,10 +39,7 @@ async def _handle_plugin(
ctx: Context | None,
confirm: bool,
) -> dict[str, Any]:
if subaction not in _PLUGIN_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for plugin. Must be one of: {sorted(_PLUGIN_SUBACTIONS)}"
)
validate_subaction(subaction, _PLUGIN_SUBACTIONS, "plugin")
await gate_destructive_action(
ctx,

View file

@ -11,7 +11,8 @@ from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action
from ..core.validation import DANGEROUS_KEY_PATTERN, MAX_VALUE_LENGTH
from ..core.utils import safe_get, validate_subaction
from ..core.validation import DANGEROUS_KEY_PATTERN, validate_scalar_mapping
# ===========================================================================
@ -31,28 +32,15 @@ _RCLONE_MUTATIONS: dict[str, str] = {
_RCLONE_SUBACTIONS: set[str] = set(_RCLONE_QUERIES) | set(_RCLONE_MUTATIONS)
_RCLONE_DESTRUCTIVE: set[str] = {"delete_remote"}
_MAX_CONFIG_KEYS = 50
_MAX_NAME_LENGTH = 128
def _validate_rclone_config(config_data: dict[str, Any]) -> dict[str, str]:
if len(config_data) > _MAX_CONFIG_KEYS:
raise ToolError(f"config_data has {len(config_data)} keys (max {_MAX_CONFIG_KEYS})")
validated: dict[str, str] = {}
for key, value in config_data.items():
if not isinstance(key, str) or not key.strip():
raise ToolError(
f"config_data keys must be non-empty strings, got: {type(key).__name__}"
)
if DANGEROUS_KEY_PATTERN.search(key):
raise ToolError(f"config_data key '{key}' contains disallowed characters")
if not isinstance(value, (str, int, float, bool)):
raise ToolError(f"config_data['{key}'] must be a string, number, or boolean")
str_value = str(value)
if len(str_value) > MAX_VALUE_LENGTH:
raise ToolError(
f"config_data['{key}'] value exceeds max length ({len(str_value)} > {MAX_VALUE_LENGTH})"
)
validated[key] = str_value
return validated
def _validate_rclone_name(value: str, label: str) -> None:
"""Validate a top-level rclone field (name or provider_type)."""
if len(value) > _MAX_NAME_LENGTH:
raise ToolError(f"{label} exceeds max length ({len(value)} > {_MAX_NAME_LENGTH})")
if DANGEROUS_KEY_PATTERN.search(value):
raise ToolError(f"{label} contains disallowed characters")
async def _handle_rclone(
@ -63,10 +51,7 @@ async def _handle_rclone(
ctx: Context | None,
confirm: bool,
) -> dict[str, Any]:
if subaction not in _RCLONE_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for rclone. Must be one of: {sorted(_RCLONE_SUBACTIONS)}"
)
validate_subaction(subaction, _RCLONE_SUBACTIONS, "rclone")
await gate_destructive_action(
ctx,
@ -81,7 +66,7 @@ async def _handle_rclone(
if subaction == "list_remotes":
data = await _client.make_graphql_request(_RCLONE_QUERIES["list_remotes"])
remotes = data.get("rclone", {}).get("remotes", [])
remotes = safe_get(data, "rclone", "remotes", default=[])
return {"remotes": list(remotes) if isinstance(remotes, list) else []}
if subaction == "config_form":
@ -91,7 +76,7 @@ async def _handle_rclone(
data = await _client.make_graphql_request(
_RCLONE_QUERIES["config_form"], variables or None
)
form = (data.get("rclone") or {}).get("configForm", {})
form = safe_get(data, "rclone", "configForm", default={})
if not form:
raise ToolError("No RClone config form data received")
return dict(form)
@ -99,12 +84,16 @@ async def _handle_rclone(
if subaction == "create_remote":
if name is None or provider_type is None or config_data is None:
raise ToolError("create_remote requires name, provider_type, and config_data")
validated = _validate_rclone_config(config_data)
_validate_rclone_name(name, "name")
_validate_rclone_name(provider_type, "provider_type")
validated = validate_scalar_mapping(
config_data, "config_data", max_keys=_MAX_CONFIG_KEYS, stringify=True
)
data = await _client.make_graphql_request(
_RCLONE_MUTATIONS["create_remote"],
{"input": {"name": name, "type": provider_type, "parameters": validated}},
)
remote = (data.get("rclone") or {}).get("createRCloneRemote")
remote = safe_get(data, "rclone", "createRCloneRemote")
if not remote:
raise ToolError(f"Failed to create remote '{name}': no confirmation from server")
return {
@ -119,7 +108,7 @@ async def _handle_rclone(
data = await _client.make_graphql_request(
_RCLONE_MUTATIONS["delete_remote"], {"input": {"name": name}}
)
if not (data.get("rclone") or {}).get("deleteRCloneRemote", False):
if not safe_get(data, "rclone", "deleteRCloneRemote", default=False):
raise ToolError(f"Failed to delete remote '{name}'")
return {"success": True, "message": f"Remote '{name}' deleted successfully"}

View file

@ -10,8 +10,9 @@ from fastmcp import Context
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import validate_subaction
from ..core.guards import gate_destructive_action
from ..core.validation import DANGEROUS_KEY_PATTERN, MAX_VALUE_LENGTH
from ..core.validation import DANGEROUS_KEY_PATTERN, validate_scalar_mapping
# ===========================================================================
@ -21,45 +22,6 @@ from ..core.validation import DANGEROUS_KEY_PATTERN, MAX_VALUE_LENGTH
_MAX_SETTINGS_KEYS = 100
def _validate_settings_mapping(settings_input: dict[str, Any]) -> dict[str, Any]:
"""Validate flat scalar settings data before forwarding to the Unraid API.
Enforces a key count cap and rejects dangerous key names and oversized values
to prevent unvalidated bulk input from reaching the API. Modeled on
_validate_rclone_config in _rclone.py.
Only scalar values (str, int, float, bool) are accepted; dict/list values
cannot be accurately size-checked without JSON serialisation and may carry
nested injection payloads. Callers needing complex values should use the
raw GraphQL API instead.
"""
if len(settings_input) > _MAX_SETTINGS_KEYS:
raise ToolError(f"settings_input has {len(settings_input)} keys (max {_MAX_SETTINGS_KEYS})")
validated: dict[str, Any] = {}
for key, value in settings_input.items():
if not isinstance(key, str) or not key.strip():
raise ToolError(
f"settings_input keys must be non-empty strings, got: {type(key).__name__}"
)
if DANGEROUS_KEY_PATTERN.search(key):
raise ToolError(f"settings_input key '{key}' contains disallowed characters")
if not isinstance(value, (str, int, float, bool)):
raise ToolError(
f"settings_input['{key}'] must be a string, number, or boolean, got: {type(value).__name__}"
)
str_value = str(value)
if len(str_value) > MAX_VALUE_LENGTH:
raise ToolError(
f"settings_input['{key}'] value exceeds max length ({len(str_value)} > {MAX_VALUE_LENGTH})"
)
# Store the original typed value, not str_value — callers (GraphQL mutations)
# expect int/float/bool to arrive as their native types, not stringified.
# _rclone.py differs: it always stringifies because rclone config values are
# strings on the wire; settings mutations accept JSON scalars directly.
validated[key] = value
return validated
def _validate_json_settings_input(settings_input: dict[str, Any]) -> dict[str, Any]:
"""Validate JSON-typed settings input without narrowing valid JSON members."""
if len(settings_input) > _MAX_SETTINGS_KEYS:
@ -92,10 +54,7 @@ async def _handle_setting(
ctx: Context | None,
confirm: bool,
) -> dict[str, Any]:
if subaction not in _SETTING_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for setting. Must be one of: {sorted(_SETTING_SUBACTIONS)}"
)
validate_subaction(subaction, _SETTING_SUBACTIONS, "setting")
await gate_destructive_action(
ctx,
@ -123,7 +82,9 @@ async def _handle_setting(
# Validate ups_config with the same rules as settings_input — key count
# cap, scalar-only values, MAX_VALUE_LENGTH — to prevent unvalidated bulk
# input from reaching the GraphQL mutation.
validated_ups = _validate_settings_mapping(ups_config)
validated_ups = validate_scalar_mapping(
ups_config, "ups_config", max_keys=_MAX_SETTINGS_KEYS
)
data = await _client.make_graphql_request(
_SETTING_MUTATIONS["configure_ups"], {"config": validated_ups}
)

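Both removed validators converge on `validate_scalar_mapping`. A sketch inferred from the two removed bodies and the new call sites follows; the constant values and the exact `DANGEROUS_KEY_PATTERN` are assumptions (the real ones live in `core/validation.py`):

```python
# Hypothetical sketch of the consolidated validator, inferred from the removed
# _validate_rclone_config and _validate_settings_mapping bodies.
import re
from typing import Any

MAX_VALUE_LENGTH = 4096                         # assumed cap; real value in core.validation
DANGEROUS_KEY_PATTERN = re.compile(r"[^\w.-]")  # assumed pattern: word chars, dot, dash


class ToolError(Exception):
    pass


def validate_scalar_mapping(
    mapping: dict[str, Any], label: str, *, max_keys: int, stringify: bool = False
) -> dict[str, Any]:
    if len(mapping) > max_keys:
        raise ToolError(f"{label} has {len(mapping)} keys (max {max_keys})")
    validated: dict[str, Any] = {}
    for key, value in mapping.items():
        if not isinstance(key, str) or not key.strip():
            raise ToolError(f"{label} keys must be non-empty strings, got: {type(key).__name__}")
        if DANGEROUS_KEY_PATTERN.search(key):
            raise ToolError(f"{label} key '{key}' contains disallowed characters")
        if not isinstance(value, (str, int, float, bool)):
            raise ToolError(f"{label}['{key}'] must be a string, number, or boolean")
        str_value = str(value)
        if len(str_value) > MAX_VALUE_LENGTH:
            raise ToolError(
                f"{label}['{key}'] value exceeds max length ({len(str_value)} > {MAX_VALUE_LENGTH})"
            )
        # rclone stringifies (config values are strings on the wire); settings
        # keep native scalar types for the GraphQL mutation.
        validated[key] = str_value if stringify else value
    return validated
```

The `stringify` flag is what lets one function serve both call sites: `_rclone.py` passes `stringify=True`, `_setting.py` omits it and gets typed values back.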
View file

@ -2,7 +2,7 @@
Covers: overview, array, network, registration, variables, metrics, services,
display, config, online, owner, settings, server, servers, flash, ups_devices,
ups_device, ups_config (18 subactions).
ups_device, ups_config, server_time, timezones (20 subactions).
"""
from typing import Any
@ -10,7 +10,7 @@ from typing import Any
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import format_kb
from ..core.utils import format_kb, safe_get, validate_subaction
# ===========================================================================
@ -87,6 +87,8 @@ _SYSTEM_QUERIES: dict[str, str] = {
"ups_devices": "query GetUpsDevices { upsDevices { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage } } }",
"ups_device": "query GetUpsDevice($id: String!) { upsDeviceById(id: $id) { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage } } }",
"ups_config": "query GetUpsConfig { upsConfiguration { service upsCable upsType device batteryLevel minutes timeout killUps upsName } }",
"server_time": "query GetSystemTime { systemTime { currentTime timeZone useNtp ntpServers } }",
"timezones": "query GetTimeZones { timeZoneOptions { value label } }",
}
_SYSTEM_SUBACTIONS: set[str] = set(_SYSTEM_QUERIES)
@ -120,10 +122,7 @@ def _analyze_disk_health(disks: list[dict[str, Any]]) -> dict[str, int]:
async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any]:
if subaction not in _SYSTEM_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for system. Must be one of: {sorted(_SYSTEM_SUBACTIONS)}"
)
validate_subaction(subaction, _SYSTEM_SUBACTIONS, "system")
if subaction == "ups_device" and not device_id:
raise ToolError("device_id is required for system/ups_device")
@ -197,7 +196,7 @@ async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any
return {"summary": summary, "details": raw}
if subaction == "display":
return dict((data.get("info") or {}).get("display") or {})
return dict(safe_get(data, "info", "display", default={}))
if subaction == "online":
return {"online": data.get("online")}
if subaction == "settings":
@ -243,6 +242,7 @@ async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any
"owner": "owner",
"flash": "flash",
"ups_config": "upsConfiguration",
"server_time": "systemTime",
}
if subaction in simple_dict:
result = data.get(simple_dict[subaction])
@ -259,6 +259,7 @@ async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any
"services": ("services", "services"),
"servers": ("servers", "servers"),
"ups_devices": ("upsDevices", "ups_devices"),
"timezones": ("timeZoneOptions", "timezones"),
}
if subaction in list_actions:
response_key, output_key = list_actions[subaction]

View file

@ -8,6 +8,7 @@ from typing import Any
from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import validate_subaction
# ===========================================================================
@ -22,10 +23,7 @@ _USER_SUBACTIONS: set[str] = set(_USER_QUERIES)
async def _handle_user(subaction: str) -> dict[str, Any]:
if subaction not in _USER_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for user. Must be one of: {sorted(_USER_SUBACTIONS)}"
)
validate_subaction(subaction, _USER_SUBACTIONS, "user")
with tool_error_handler("user", subaction, logger):
logger.info("Executing unraid action=user subaction=me")

View file

@ -11,15 +11,19 @@ from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action
from ..core.utils import validate_subaction
# ===========================================================================
# VM
# ===========================================================================
# VmDomain only exposes id/name/state/uuid — no richer detail query exists in the
# Unraid GraphQL schema, so "details" reuses the same query and filters client-side.
_VM_LIST_QUERY = "query ListVMs { vms { id domains { id name state uuid } } }"
_VM_QUERIES: dict[str, str] = {
"list": "query ListVMs { vms { id domains { id name state uuid } } }",
"details": "query ListVMs { vms { id domains { id name state uuid } } }",
"list": _VM_LIST_QUERY,
}
_VM_MUTATIONS: dict[str, str] = {
@ -32,7 +36,7 @@ _VM_MUTATIONS: dict[str, str] = {
"reset": "mutation ResetVM($id: PrefixedID!) { vm { reset(id: $id) } }",
}
_VM_SUBACTIONS: set[str] = set(_VM_QUERIES) | set(_VM_MUTATIONS)
_VM_SUBACTIONS: set[str] = set(_VM_QUERIES) | set(_VM_MUTATIONS) | {"details"}
_VM_DESTRUCTIVE: set[str] = {"force_stop", "reset"}
_VM_MUTATION_FIELDS: dict[str, str] = {"force_stop": "forceStop"}
@ -40,10 +44,7 @@ _VM_MUTATION_FIELDS: dict[str, str] = {"force_stop": "forceStop"}
async def _handle_vm(
subaction: str, vm_id: str | None, ctx: Context | None, confirm: bool
) -> dict[str, Any]:
if subaction not in _VM_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for vm. Must be one of: {sorted(_VM_SUBACTIONS)}"
)
validate_subaction(subaction, _VM_SUBACTIONS, "vm")
if subaction != "list" and not vm_id:
raise ToolError(f"vm_id is required for vm/{subaction}")
@ -71,7 +72,8 @@ async def _handle_vm(
return {"vms": []}
if subaction == "details":
data = await _client.make_graphql_request(_VM_QUERIES["details"])
# VmDomain has no richer fields than list — reuse the same query, filter client-side.
data = await _client.make_graphql_request(_VM_LIST_QUERY)
if not data.get("vms"):
raise ToolError("No VM data returned from server")
vms = data["vms"].get("domains") or data["vms"].get("domain") or []

View file

@ -4,14 +4,14 @@ Provides the `unraid` tool with 15 actions, each routing to domain-specific
subactions via the action + subaction pattern.
Actions:
system - Server info, metrics, network, UPS (19 subactions)
system - Server info, metrics, network, UPS (20 subactions)
health - Health checks, connection test, diagnostics, setup (4 subactions)
array - Parity checks, array state, disk operations (13 subactions)
disk - Shares, physical disks, log files (6 subactions)
docker - Container lifecycle and network inspection (7 subactions)
vm - Virtual machine lifecycle (9 subactions)
notification - System notifications CRUD (12 subactions)
key - API key management (7 subactions)
notification - System notifications CRUD (13 subactions)
key - API key management (8 subactions)
plugin - Plugin management (3 subactions)
rclone - Cloud storage remote management (4 subactions)
setting - System settings and UPS config (2 subactions)
@ -31,6 +31,7 @@ from ..config.logging import logger
from ..core import client as _client
from ..core.exceptions import ToolError, tool_error_handler
from ..core.setup import elicit_and_configure, elicit_reset_confirmation
from ..core.utils import validate_subaction
# Re-exports: domain modules' constants and helpers needed by tests
# Re-export array queries for schema tests
@ -91,10 +92,7 @@ from ._vm import _VM_DESTRUCTIVE, _VM_MUTATIONS, _VM_QUERIES, _handle_vm # noqa
async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any] | str:
if subaction not in _HEALTH_SUBACTIONS:
raise ToolError(
f"Invalid subaction '{subaction}' for health. Must be one of: {sorted(_HEALTH_SUBACTIONS)}"
)
validate_subaction(subaction, _HEALTH_SUBACTIONS, "health")
from ..config.settings import (
CREDENTIALS_ENV_PATH,
@ -224,7 +222,7 @@ Single entry point for all operations. Use `action` + `subaction` to select an o
|-----------|------|-------------|
| `action` | str | One of the actions above |
| `subaction` | str | Operation within the action |
| `confirm` | bool | Required for destructive operations (default: False) |
| `confirm` | bool | Set `True` for destructive subactions (marked `*`). Interactive clients are prompted via elicitation; agents and one-shot API callers **must** pass `confirm=True` to bypass elicitation. (default: False) |
| `container_id` | str | Docker container ID or name |
| `vm_id` | str | VM identifier |
| `disk_id` | str | Disk identifier |
@ -395,7 +393,9 @@ def register_unraid_tool(mcp: FastMCP) -> None:
log_tail (requires path=), notification_feed
* Destructive requires confirm=True
* Destructive: interactive clients are prompted for confirmation via
elicitation. Agents and non-interactive callers must pass confirm=True
to execute these subactions.
"""
if action == "system":
return await _handle_system(subaction, device_id)

View file

@ -1572,7 +1572,7 @@ wheels = [
[[package]]
name = "unraid-mcp"
version = "1.2.1"
version = "1.2.4"
source = { editable = "." }
dependencies = [
{ name = "fastapi" },