mirror of https://github.com/bunkerity/bunkerweb
synced 2026-04-21 13:37:48 +00:00

feat: Add new API service

This commit is contained in:
parent be4d886d4b
commit caf21af400

84 changed files with 8075 additions and 574 deletions
1 .gitignore (vendored)

@ -9,6 +9,7 @@ package-lock.json
/src/ui/*.txt
.mypy_cache
.cache/
.pytest_cache/
.env
/package*.json
src/ui/client/static
25 AGENTS.md (new file)

@ -0,0 +1,25 @@
# Repository Guidelines

## Project Structure & Module Organization

Source lives in `src/`: `api/` (FastAPI service), `ui/` (admin UI), `linux/` (distribution packages and service units), `bw/` and `common/` (core WAF logic), plus packaging targets like `all-in-one/` and other platform bundles. Integration assets and manifests are under `examples/`, while reusable configuration templates live in `env/`. MkDocs content for docs.bunkerweb.io sits in `docs/`. System tests, fixtures, and helper scripts are consolidated in `tests/`.

## Build, Test, and Development Commands

Bootstrap Python deps per component, e.g. `pip install -r src/api/requirements.txt` or `pip install -r src/ui/requirements.txt`. Build a full appliance image with `docker build -f src/all-in-one/Dockerfile .`. Exercise integrations locally via `python tests/main.py docker`; swap `docker` for `linux`, `autoconf`, `swarm`, or `kubernetes` as needed (set the matching `TEST_DOMAIN*` env vars first). Regenerate docs with `mkdocs serve --watch` from the repo root. Run `pre-commit run --all-files` before pushing to execute the standard formatters and linters.

## Coding Style & Naming Conventions

Python uses Black (160-char lines) and Flake8 with ignores defined in `.pre-commit-config.yaml`; prefer snake_case for modules and functions, PascalCase for classes. Lua code is formatted with StyLua (see `stylua.toml`) and linted with Luacheck; follow lowercase module names and descriptive function names. Shell scripts must pass ShellCheck and stay POSIX-compatible unless a `#!/bin/bash` shebang is explicit. Front-end assets follow Prettier defaults.

## Testing Guidelines

High-level acceptance suites live in `tests/` and orchestrate Dockerized environments; verify Docker access and the required `TEST_DOMAIN*` env vars before running. Add scenario files under `examples/<use-case>/tests.json` with descriptive names; tests should assert observable behavior rather than internals. For unit-style Python additions, provide lightweight checks inside the relevant module and hook them into integration flows when feasible. Capture regressions by replicating failing requests in the automated suites.

## Commit & Pull Request Guidelines

Use concise, present-tense messages; the history favors Conventional Commits (`feat:`, `fix:`, `docs:`) or `<component> - …` prefixes. Reference issue IDs when closing or relating tickets. Each PR should include a summary of changes, validation steps (commands or screenshots for UI changes), and updated docs or config when behavior shifts. Coordinate breaking changes with maintainers and flag them clearly in both the commit body and the PR description.

## Security & Configuration Tips

Never commit secrets; use sample files in `env/` or add new templates when introducing config. Review `.trivyignore` and `.gitleaksignore` before adjusting dependencies. When touching TLS, keys, or rule bundles, document rotation steps and default hardening in the accompanying docs update.
@ -1,5 +1,9 @@
# Changelog

## v1.6.5-rc4 - ????/??/??

- [API] Introduce a dedicated control-plane service exposing a REST API to programmatically manage BunkerWeb: list/register instances, trigger reload/stop, and manage bans, plugins, jobs, and configurations.

## v1.6.5-rc3 - ????/??/??

- [BUGFIX] Fix lua session handling when using redis
@ -4,6 +4,8 @@ First of all, thanks for being here and showing your support to the project!

We accept many types of contributions, whether they are technical or not. Every community feedback, work or help is, and will always be, appreciated.

Before getting started, review `AGENTS.md` for repository structure, tooling, and workflow expectations.

## Talk about the project

The first thing you can do is to talk about the project. You can share it on social media (by the way, you can also follow us on [LinkedIn](https://www.linkedin.com/company/bunkerity/), [Twitter](https://twitter.com/bunkerity) and [GitHub](https://github.com/bunkerity)), write a blog post about it, or simply tell your friends/colleagues that it's an awesome project.
512 docs/api.md (new file)

@ -0,0 +1,512 @@
# API

## Overview

The BunkerWeb API is the control plane used to manage BunkerWeb instances programmatically: list and manage instances, reload/stop them, and handle bans, plugins, jobs, configs, and more. It exposes a documented FastAPI application with strong authentication, authorization, and rate limiting.

Open the interactive documentation at `/docs` (or `<root_path>/docs` if you set `API_ROOT_PATH`). The OpenAPI schema is available at `/openapi.json`.

!!! warning "Security"
    The API is a privileged control plane. Do not expose it on the public Internet without additional protections.

    At a minimum, restrict source IPs (`API_WHITELIST_IPS`), enable authentication (`API_TOKEN` or API users + Biscuit), and consider putting it behind BunkerWeb with an unguessable path and extra access controls.

## Prerequisites

The API service requires access to the BunkerWeb database (`DATABASE_URI`). It is usually deployed alongside the Scheduler and optionally the Web UI. The recommended setup is to run BunkerWeb in front as a reverse proxy and isolate the API on an internal network.

See the quickstart wizard and architecture guidance in the [quickstart guide](quickstart-guide.md).

## Highlights

- Instance-aware: broadcasts operational actions to discovered instances.
- Strong auth: Basic for admins, Bearer admin override, or Biscuit ACL for fine-grained permissions.
- IP allowlist and flexible per-route rate limiting.
- Standard health/readiness signals and startup safety checks.
## Compose boilerplates

=== "Docker"

    Reverse proxy the API under `/api` with BunkerWeb.

    ```yaml
    x-bw-env: &bw-env
      # Shared instance control-plane allowlist for BunkerWeb/Scheduler
      API_WHITELIST_IP: "127.0.0.0/8 10.20.30.0/24"

    services:
      bunkerweb:
        image: bunkerity/bunkerweb:1.6.5-rc3
        ports:
          - "80:8080/tcp"
          - "443:8443/tcp"
          - "443:8443/udp" # QUIC
        environment:
          <<: *bw-env
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-services

      bw-scheduler:
        image: bunkerity/bunkerweb-scheduler:1.6.5-rc3
        environment:
          <<: *bw-env
          BUNKERWEB_INSTANCES: "bunkerweb" # Match the instance service name
          SERVER_NAME: "www.example.com"
          MULTISITE: "yes"
          DATABASE_URI: "mariadb+pymysql://bunkerweb:changeme@bw-db:3306/db"
          DISABLE_DEFAULT_SERVER: "yes"
          # Reverse-proxy the API on /api
          www.example.com_USE_REVERSE_PROXY: "yes"
          www.example.com_REVERSE_PROXY_URL: "/api"
          www.example.com_REVERSE_PROXY_HOST: "http://bw-api:8888"
        volumes:
          - bw-storage:/data
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-db

      bw-api:
        image: bunkerity/bunkerweb-api:1.6.5-rc3
        environment:
          DATABASE_URI: "mariadb+pymysql://bunkerweb:changeme@bw-db:3306/db" # Use a strong password
          API_WHITELIST_IPS: "127.0.0.0/8 10.20.30.0/24" # API allowlist
          API_TOKEN: "secret" # Optional admin override token
          API_ROOT_PATH: "/api" # Match reverse-proxy path
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-db

      bw-db:
        image: mariadb:11
        # Avoid issues with large queries
        command: --max-allowed-packet=67108864
        environment:
          MYSQL_RANDOM_ROOT_PASSWORD: "yes"
          MYSQL_DATABASE: "db"
          MYSQL_USER: "bunkerweb"
          MYSQL_PASSWORD: "changeme" # Use a strong password
        volumes:
          - bw-data:/var/lib/mysql
        restart: unless-stopped
        networks:
          - bw-db

    volumes:
      bw-data:
      bw-storage:

    networks:
      bw-universe:
        name: bw-universe
        ipam:
          driver: default
          config:
            - subnet: 10.20.30.0/24
      bw-services:
        name: bw-services
      bw-db:
        name: bw-db
    ```
=== "Docker Autoconf"

    Same as above but leveraging the Autoconf service to discover and configure services automatically. The API is exposed under `/api` using labels on the API container.

    ```yaml
    x-api-env: &api-env
      AUTOCONF_MODE: "yes"
      DATABASE_URI: "mariadb+pymysql://bunkerweb:changeme@bw-db:3306/db" # Use a strong password

    services:
      bunkerweb:
        image: bunkerity/bunkerweb:1.6.5-rc3
        ports:
          - "80:8080/tcp"
          - "443:8443/tcp"
          - "443:8443/udp" # QUIC
        environment:
          AUTOCONF_MODE: "yes"
          API_WHITELIST_IP: "127.0.0.0/8 10.20.30.0/24"
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-services

      bw-scheduler:
        image: bunkerity/bunkerweb-scheduler:1.6.5-rc3
        environment:
          <<: *api-env
          BUNKERWEB_INSTANCES: "" # Discovered by Autoconf
          SERVER_NAME: "" # Filled via labels
          MULTISITE: "yes" # Mandatory with Autoconf
          API_WHITELIST_IP: "127.0.0.0/8 10.20.30.0/24"
        volumes:
          - bw-storage:/data
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-db

      bw-autoconf:
        image: bunkerity/bunkerweb-autoconf:1.6.5-rc3
        depends_on:
          - bunkerweb
          - bw-docker
        environment:
          <<: *api-env
          DOCKER_HOST: "tcp://bw-docker:2375"
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-docker
          - bw-db

      bw-api:
        image: bunkerity/bunkerweb-api:1.6.5-rc3
        environment:
          <<: *api-env
          API_WHITELIST_IPS: "127.0.0.0/8 10.20.30.0/24"
          API_TOKEN: "secret"
          API_ROOT_PATH: "/api"
        labels:
          - "bunkerweb.SERVER_NAME=www.example.com"
          - "bunkerweb.USE_REVERSE_PROXY=yes"
          - "bunkerweb.REVERSE_PROXY_URL=/api"
          - "bunkerweb.REVERSE_PROXY_HOST=http://bw-api:8888"
        restart: unless-stopped
        networks:
          - bw-universe
          - bw-db

      bw-db:
        image: mariadb:11
        command: --max-allowed-packet=67108864
        environment:
          MYSQL_RANDOM_ROOT_PASSWORD: "yes"
          MYSQL_DATABASE: "db"
          MYSQL_USER: "bunkerweb"
          MYSQL_PASSWORD: "changeme"
        volumes:
          - bw-data:/var/lib/mysql
        restart: unless-stopped
        networks:
          - bw-db

      bw-docker:
        image: tecnativa/docker-socket-proxy:nightly
        environment:
          CONTAINERS: "1"
          LOG_LEVEL: "warning"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: unless-stopped
        networks:
          - bw-docker

    volumes:
      bw-data:
      bw-storage:

    networks:
      bw-universe:
        name: bw-universe
        ipam:
          driver: default
          config:
            - subnet: 10.20.30.0/24
      bw-services:
        name: bw-services
      bw-db:
        name: bw-db
      bw-docker:
        name: bw-docker
    ```

!!! warning "Reverse proxy path"
    Keep the API path unguessable and combine it with the API allowlist and authentication.

    If you already expose another app on the same server name with a template (e.g. `USE_TEMPLATE`), prefer a separate hostname for the API to avoid conflicts.
### All-In-One

If you use the All-In-One image, the API can be enabled by setting `SERVICE_API=yes`:

```bash
docker run -d \
  --name bunkerweb-aio \
  -e SERVICE_API=yes \
  -e API_WHITELIST_IPS="127.0.0.0/8" \
  -p 80:8080/tcp -p 443:8443/tcp -p 443:8443/udp \
  bunkerity/bunkerweb-all-in-one:1.6.5-rc3
```
## Authentication

Supported ways to authenticate requests:

- Basic admin: when the credentials belong to an admin API user, protected endpoints accept `Authorization: Basic <base64(username:password)>`.
- Admin Bearer override: if `API_TOKEN` is configured, `Authorization: Bearer <API_TOKEN>` grants full access.
- Biscuit token (recommended): obtain a token from `POST /auth` using Basic credentials or a JSON/form body containing `username` and `password`. Use the returned token as `Authorization: Bearer <token>` on subsequent calls.

Example: get a Biscuit, list instances, then reload all instances.

```bash
# 1) Get a Biscuit token with admin credentials
TOKEN=$(curl -s -X POST -u admin:changeme http://api.example.com/auth | jq -r .token)

# 2) List instances
curl -H "Authorization: Bearer $TOKEN" http://api.example.com/instances

# 3) Reload configuration across all instances (no test)
curl -X POST -H "Authorization: Bearer $TOKEN" \
  "http://api.example.com/instances/reload?test=no"
```
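The same header formats can be built in any client language. The sketch below is illustrative, not part of an official BunkerWeb client library; the helper names are invented here, and you would pass the resulting dict to your HTTP library of choice (e.g. `requests`).

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # Basic admin: base64(username:password), as accepted by protected endpoints
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}

def bearer_header(token: str) -> dict:
    # Works for both the API_TOKEN override and a Biscuit returned by POST /auth
    return {"Authorization": f"Bearer {token}"}

print(basic_auth_header("admin", "changeme"))
```

The Bearer form is what you would use with the `TOKEN` obtained in the curl example above.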
### Biscuit facts and checks

Tokens embed facts like `user(<username>)`, `client_ip(<ip>)`, `domain(<host>)`, and a coarse role `role("api_user", ["read", "write"])` derived from DB permissions. Admins include `admin(true)` while non-admins carry fine-grained facts such as `api_perm(<resource_type>, <resource_id|*>, <permission>)`.

Authorization maps the route/method to required permissions; `admin(true)` always passes. When fine-grained facts are absent, the guard falls back to the coarse role: GET/HEAD/OPTIONS require `read`; write verbs require `write`.

Keys are stored at `/var/lib/bunkerweb/.api_biscuit_private_key` and `/var/lib/bunkerweb/.api_biscuit_public_key`. You can also provide `BISCUIT_PUBLIC_KEY`/`BISCUIT_PRIVATE_KEY` via environment variables; if neither files nor env vars are set, the API generates a key pair at startup and persists it securely.
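The coarse-role fallback described above amounts to a simple verb-to-permission mapping. This is a sketch of the logic, not the actual guard implementation:

```python
READ_METHODS = {"GET", "HEAD", "OPTIONS"}

def required_role_permission(method: str) -> str:
    # Coarse fallback used when no fine-grained api_perm fact applies:
    # safe verbs need "read", mutating verbs need "write"
    return "read" if method.upper() in READ_METHODS else "write"

def is_authorized(granted: set, method: str, admin: bool = False) -> bool:
    # admin(true) always passes; otherwise the coarse role must cover the verb
    return admin or required_role_permission(method) in granted
```

For example, a token carrying only `role("api_user", ["read"])` can GET but not POST, while an admin token passes either way.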
## Permissions (ACL)

This API supports two authorization layers:

- Coarse role: tokens carry `role("api_user", ["read"[, "write"]])` for endpoints without a fine-grained mapping. Read maps to GET/HEAD/OPTIONS; write maps to POST/PUT/PATCH/DELETE.
- Fine-grained ACL: tokens embed `api_perm(<resource_type>, <resource_id|*>, <permission>)` and routes declare what they require. `admin(true)` bypasses all checks.

Supported resource types: `instances`, `global_config`, `services`, `configs`, `plugins`, `cache`, `bans`, `jobs`.

Permission names by resource type:

- instances: `instances_read`, `instances_update`, `instances_delete`, `instances_create`, `instances_execute`
- global_config: `global_config_read`, `global_config_update`
- services: `service_read`, `service_create`, `service_update`, `service_delete`, `service_convert`, `service_export`
- configs: `configs_read`, `config_read`, `config_create`, `config_update`, `config_delete`
- plugins: `plugin_read`, `plugin_create`, `plugin_delete`
- cache: `cache_read`, `cache_delete`
- bans: `ban_read`, `ban_update`, `ban_delete`, `ban_created`
- jobs: `job_read`, `job_run`

Resource IDs: for fine-grained checks, the second path segment is treated as `resource_id` when meaningful. Examples: `/services/{service}` -> `{service}`; `/configs/{service}/...` -> `{service}`. Use `"*"` (or omit) to grant globally for a resource type.

User and ACL configuration:

- Admin user: set `API_USERNAME` and `API_PASSWORD` to create the first admin at startup. To rotate creds later, set `OVERRIDE_API_CREDS=yes` (or ensure the admin was created with method `manual`). Only one admin exists; additional attempts fall back to non-admin creation.
- Non-admin users and grants: provide `API_ACL_BOOTSTRAP_FILE` pointing to a JSON file, or mount `/var/lib/bunkerweb/api_acl_bootstrap.json`. The API reads it at startup to create/update users and permissions.
- ACL cache file: a read-only summary is written at `/var/lib/bunkerweb/api_acl.json` at startup for introspection; authorization evaluates DB-backed grants baked into the Biscuit token.

Bootstrap JSON examples (both forms supported):

```json
{
  "users": {
    "ci": {
      "admin": false,
      "password": "Str0ng&P@ss!",
      "permissions": {
        "services": {
          "*": { "service_read": true },
          "app-frontend": { "service_update": true, "service_delete": false }
        },
        "configs": {
          "app-frontend": { "config_read": true, "config_update": true }
        }
      }
    },
    "ops": {
      "admin": false,
      "password_hash": "$2b$13$...bcrypt-hash...",
      "permissions": {
        "instances": { "*": { "instances_execute": true } },
        "jobs": { "*": { "job_run": true } }
      }
    }
  }
}
```

Or list form:

```json
{
  "users": [
    {
      "username": "ci",
      "password": "Str0ng&P@ss!",
      "permissions": [
        { "resource_type": "services", "resource_id": "*", "permission": "service_read" },
        { "resource_type": "services", "resource_id": "app-frontend", "permission": "service_update" }
      ]
    }
  ]
}
```

Notes:

- Passwords may be plaintext (`password`) or bcrypt (`password_hash` / `password_bcrypt`). Weak plaintext passwords are rejected in non-debug builds; if missing, a random one is generated and a warning is logged.
- `resource_id: "*"` (or null/empty) grants globally on that resource type.
- Existing users can have passwords updated and additional grants applied via bootstrap.
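When maintaining a nested-form bootstrap file, it can be handy to lint it against the permission tables above before deploying. The validator below is a sketch written for this document; the real API performs its own validation at startup, and this helper is not part of it:

```python
# Permission names per resource type, as listed in the ACL section above
ALLOWED = {
    "instances": {"instances_read", "instances_update", "instances_delete",
                  "instances_create", "instances_execute"},
    "global_config": {"global_config_read", "global_config_update"},
    "services": {"service_read", "service_create", "service_update",
                 "service_delete", "service_convert", "service_export"},
    "configs": {"configs_read", "config_read", "config_create",
                "config_update", "config_delete"},
    "plugins": {"plugin_read", "plugin_create", "plugin_delete"},
    "cache": {"cache_read", "cache_delete"},
    "bans": {"ban_read", "ban_update", "ban_delete", "ban_created"},
    "jobs": {"job_read", "job_run"},
}

def check_bootstrap(doc: dict) -> list:
    """Return a list of problems found in a nested-form bootstrap document."""
    errors = []
    for user, spec in doc.get("users", {}).items():
        for rtype, resources in spec.get("permissions", {}).items():
            if rtype not in ALLOWED:
                errors.append(f"{user}: unknown resource type {rtype!r}")
                continue
            for rid, perms in resources.items():
                for perm in perms:
                    if perm not in ALLOWED[rtype]:
                        errors.append(f"{user}: {rtype}/{rid}: unknown permission {perm!r}")
    return errors
```

Running it over the `ci`/`ops` example above should report no problems; a typo like `service_reed` would be flagged with the offending user and resource.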
## Feature reference

The API is organised by resource-focused routers. Use the sections below as a capability map; the interactive schema at `/docs` documents request/response models in detail.

### Core and authentication

- `GET /ping`, `GET /health`: lightweight liveness probes for the API service itself.
- `POST /auth`: exchange Basic credentials (or the admin override token) for a Biscuit. Accepts JSON, form, or `Authorization` headers. Admins may also continue using HTTP Basic directly on protected routes when desired.

### Instances control plane

- `GET /instances`: list registered instances, including creation/last-seen timestamps, registration method, and metadata.
- `POST /instances`: register a new API-managed instance (hostname, optional port, server name, friendly name, method).
- `GET /instances/{hostname}` / `PATCH /instances/{hostname}` / `DELETE /instances/{hostname}`: inspect, update mutable fields, or remove API-managed instances.
- `DELETE /instances`: bulk removal; skips non-API instances and reports them in `skipped`.
- `GET /instances/ping` and `GET /instances/{hostname}/ping`: health checks across all or individual instances.
- `POST /instances/reload?test=yes|no`, `POST /instances/{hostname}/reload`: trigger configuration reloads (test mode performs dry-run validation).
- `POST /instances/stop`, `POST /instances/{hostname}/stop`: relay stop commands to instances.

### Global configuration

- `GET /global_config`: fetch non-default settings (use `full=true` for the entire config, `methods=true` to include provenance).
- `PATCH /global_config`: upsert API-owned (`method="api"`) global settings; validation errors call out unknown or read-only keys.

### Service lifecycle

- `GET /services`: enumerate services with metadata, including draft status and timestamps.
- `GET /services/{service}`: retrieve non-default overlays (`full=false`) or the full config snapshot (`full=true`) for a service.
- `POST /services`: create services, optionally as draft, and seed prefixed variables (`{service}_{KEY}`). Updates the `SERVER_NAME` roster atomically.
- `PATCH /services/{service}`: rename services, toggle draft flags, and update prefixed variables. Ignores direct edits to `SERVER_NAME` within `variables` for safety.
- `DELETE /services/{service}`: remove a service and its derived configuration keys.
- `POST /services/{service}/convert?convert_to=online|draft`: quickly switch draft/online state without altering other variables.

### Custom configuration snippets

- `GET /configs`: list custom config fragments (HTTP/server/stream/ModSecurity/CRS hooks) for a service (`service=global` by default). `with_data=true` embeds UTF-8 content when printable.
- `POST /configs` and `POST /configs/upload`: create new snippets from JSON payloads or uploaded files. Accepted types include `http`, `server_http`, `default_server_http`, `modsec`, `modsec_crs`, `stream`, `server_stream`, and CRS plugin hooks. Names must match `^[\w_-]{1,64}$`.
- `GET /configs/{service}/{type}/{name}`: retrieve a snippet with optional content (`with_data=true`).
- `PATCH /configs/{service}/{type}/{name}` and `PATCH .../upload`: update or move API-managed snippets; template- or file-managed entries stay read-only.
- `DELETE /configs` and `DELETE /configs/{service}/{type}/{name}`: prune API-managed snippets while preserving template-managed ones, returning a `skipped` list for ignored entries.

### Ban orchestration

- `GET /bans`: aggregate active bans reported by all instances.
- `POST /bans` or `POST /bans/ban`: apply one or multiple bans. Payloads may be JSON objects, arrays, or stringified JSON. `service` is optional; when omitted the ban is global.
- `POST /bans/unban` or `DELETE /bans`: remove bans globally or per service using the same flexible payloads.

### Plugin management

- `GET /plugins?type=all|external|ui|pro`: list plugins with metadata; `with_data=true` includes packaged bytes when available.
- `POST /plugins/upload`: install UI plugins from `.zip`, `.tar.gz`, or `.tar.xz` archives. Archives may bundle multiple plugins as long as each contains a `plugin.json`.
- `DELETE /plugins/{id}`: remove a UI plugin by ID (`^[\w.-]{4,64}$`).

### Job cache and execution

- `GET /cache`: list cached artifacts produced by scheduler jobs, filtered by service, plugin ID, or job name. `with_data=true` includes printable file content.
- `GET /cache/{service}/{plugin}/{job}/{file}`: fetch a specific cache file (`download=true` streams an attachment).
- `DELETE /cache` or `DELETE /cache/{service}/{plugin}/{job}/{file}`: delete cache files and notify the scheduler about affected plugins.
- `GET /jobs`: inspect known jobs, their schedule metadata, and cache summaries.
- `POST /jobs/run`: request job execution by flagging the associated plugin(s) as changed.

### Operational notes

- Write endpoints persist to the shared database; instances pick up changes via scheduler sync or after a `/instances/reload`.
- Errors are normalised to `{ "status": "error", "message": "..." }` with appropriate HTTP status codes (422 validation, 404 not found, 403 ACL, 5xx upstream failures).
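Since `POST /bans` accepts a single JSON object, an array, or stringified JSON, a client can normalise whatever it holds into a list before sending. This helper is illustrative (the `ip` field is used here as an example key, not a documented schema):

```python
import json

def normalize_ban_payload(payload):
    """Accept a dict, a list of dicts, or JSON text, and return a list of bans."""
    if isinstance(payload, str):
        payload = json.loads(payload)
    if isinstance(payload, dict):
        payload = [payload]
    if not isinstance(payload, list):
        raise TypeError("expected an object, an array, or JSON text")
    return payload

# A ban without "service" is global, per the semantics above
print(normalize_ban_payload('{"ip": "1.2.3.4"}'))
```

The normalised list can then be posted as the request body to `/bans`.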
## Rate limiting

Per-client rate limiting is handled by SlowAPI. Enable/disable it and shape limits via environment variables or `/etc/bunkerweb/api.yml`.

- `API_RATE_LIMIT_ENABLED` (default: `yes`)
- Default limit: `API_RATE_LIMIT_TIMES` per `API_RATE_LIMIT_SECONDS` (e.g. `100` per `60`)
- `API_RATE_LIMIT_RULES`: inline JSON/CSV, or a path to a YAML/JSON file with per-route rules
- Storage backend: in-memory, or Redis/Valkey when `USE_REDIS=yes` and `REDIS_*` variables are provided (Sentinel supported)
- Headers: `API_RATE_LIMIT_HEADERS_ENABLED` (default: `yes`)

Example YAML (mounted at `/etc/bunkerweb/api.yml`):

```yaml
API_RATE_LIMIT_ENABLED: yes
API_RATE_LIMIT_DEFAULTS: ["200/minute"]
API_RATE_LIMIT_RULES:
  - path: "/auth"
    methods: "POST"
    times: 10
    seconds: 60
  - path: "/instances*"
    methods: "GET|POST"
    times: 100
    seconds: 60
```
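Conceptually, per-route rules like the YAML above resolve by matching the request path and method against each rule, falling back to the default limit otherwise. The sketch below models that lookup; it is a simplification for illustration, not the SlowAPI implementation:

```python
from fnmatch import fnmatch

# Mirrors the example rules from the YAML above
RULES = [
    {"path": "/auth", "methods": "POST", "times": 10, "seconds": 60},
    {"path": "/instances*", "methods": "GET|POST", "times": 100, "seconds": 60},
]
DEFAULT = {"times": 200, "seconds": 60}  # i.e. "200/minute"

def resolve_limit(path: str, method: str, rules=RULES, default=DEFAULT) -> dict:
    # First matching rule wins; "*" in path acts as a glob
    for rule in rules:
        if fnmatch(path, rule["path"]) and method in rule["methods"].split("|"):
            return {"times": rule["times"], "seconds": rule["seconds"]}
    return default
```

Under this model, `POST /auth` is capped at 10/60s while an unlisted route such as `GET /plugins` inherits the 200/minute default.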
## Configuration

You can configure the API via environment variables, Docker secrets, and the optional `/etc/bunkerweb/api.yml` or `/etc/bunkerweb/api.env` files. Key settings:

- Docs & schema: `API_DOCS_URL`, `API_REDOC_URL`, `API_OPENAPI_URL`, `API_ROOT_PATH`.
- Auth basics: `API_TOKEN` (admin override Bearer), `API_USERNAME`/`API_PASSWORD` (create/update admin), `OVERRIDE_API_CREDS`.
- ACL and users: `API_ACL_BOOTSTRAP_FILE` (JSON path).
- Biscuit policy: `API_BISCUIT_TTL_SECONDS` (0/off disables TTL), `CHECK_PRIVATE_IP` (bind token to client IP unless private).
- IP allowlist: `API_WHITELIST_ENABLED`, `API_WHITELIST_IPS`.
- Rate limiting (core): `API_RATE_LIMIT_ENABLED`, `API_RATE_LIMIT_TIMES`, `API_RATE_LIMIT_SECONDS`, `API_RATE_LIMIT_HEADERS_ENABLED`.
- Rate limiting (advanced): `API_RATE_LIMIT_AUTH_TIMES`, `API_RATE_LIMIT_AUTH_SECONDS`, `API_RATE_LIMIT_RULES`, `API_RATE_LIMIT_DEFAULTS`, `API_RATE_LIMIT_APPLICATION_LIMITS`, `API_RATE_LIMIT_STRATEGY`, `API_RATE_LIMIT_KEY`, `API_RATE_LIMIT_EXEMPT_IPS`, `API_RATE_LIMIT_STORAGE_OPTIONS`.
- Rate limiting storage: in-memory, or Redis/Valkey when `USE_REDIS=yes` and Redis settings like `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`, `REDIS_DATABASE`, `REDIS_SSL`, or Sentinel variables are set. See the Redis settings table in `docs/features.md`.
- Network/TLS: `API_LISTEN_ADDR`, `API_LISTEN_PORT`, `API_FORWARDED_ALLOW_IPS`, `API_SSL_ENABLED`, `API_SSL_CERTFILE`, `API_SSL_KEYFILE`, `API_SSL_CA_CERTS`.
### How configuration is loaded

Precedence from highest to lowest:

- Environment variables (e.g. container `environment:` or exported shell vars)
- Secrets files under `/run/secrets` (Docker/K8s secrets; filenames match variable names)
- YAML file at `/etc/bunkerweb/api.yml`
- Env file at `/etc/bunkerweb/api.env` (key=value lines)
- Built-in defaults

Notes:

- YAML supports inlining secret files with `<file:relative/path>`; the path is resolved against `/run/secrets`.
- Set doc URLs to `off`/`disabled`/`none` to disable endpoints (e.g. `API_DOCS_URL=off`).
- If `API_SSL_ENABLED=yes`, you must also set `API_SSL_CERTFILE` and `API_SSL_KEYFILE`.
- If Redis is enabled (`USE_REDIS=yes`), provide Redis details; see the Redis section in `docs/features.md`.
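The precedence list above amounts to a layered merge where higher-priority sources override lower ones. A minimal sketch of that resolution order (illustrative only; the actual loader also parses files and secrets):

```python
def resolve_config(defaults, env_file, yaml_cfg, secrets, environ):
    """Merge layers from lowest to highest priority; later layers win."""
    merged = dict(defaults)
    for layer in (env_file, yaml_cfg, secrets, environ):
        merged.update(layer)
    return merged

cfg = resolve_config(
    defaults={"API_LISTEN_PORT": "8888", "API_RATE_LIMIT_ENABLED": "yes"},
    env_file={"API_LISTEN_PORT": "9000"},        # /etc/bunkerweb/api.env
    yaml_cfg={"API_RATE_LIMIT_ENABLED": "no"},   # /etc/bunkerweb/api.yml
    secrets={},                                  # /run/secrets
    environ={"API_LISTEN_PORT": "8888"},         # process environment
)
```

Here the environment variable wins over `api.env` for the listen port, while the YAML file overrides the built-in default for rate limiting.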
### Authentication and users

- Admin bootstrap: set `API_USERNAME` and `API_PASSWORD` to create the first admin. To re-apply later, set `OVERRIDE_API_CREDS=yes`.
- Non-admins and permissions: provide `API_ACL_BOOTSTRAP_FILE` with a JSON path (or mount to `/var/lib/bunkerweb/api_acl_bootstrap.json`). The file may list users and fine-grained grants.
- Biscuit keys: either set `BISCUIT_PUBLIC_KEY`/`BISCUIT_PRIVATE_KEY` or mount files at `/var/lib/bunkerweb/.api_biscuit_public_key` and `/var/lib/bunkerweb/.api_biscuit_private_key`. If none are provided, the API generates and persists a key pair at startup.
### TLS and networking

- Bind address/port: `API_LISTEN_ADDR` (default `0.0.0.0`), `API_LISTEN_PORT` (default `8888`).
- Reverse proxies: set `API_FORWARDED_ALLOW_IPS` to the proxy IPs so Gunicorn trusts `X-Forwarded-*` headers.
- TLS termination in the API: `API_SSL_ENABLED=yes` plus `API_SSL_CERTFILE` and `API_SSL_KEYFILE`; optional `API_SSL_CA_CERTS`.
### Rate limiting quick recipes

- Disable globally: `API_RATE_LIMIT_ENABLED=no`
- Set a simple global limit: `API_RATE_LIMIT_TIMES=100`, `API_RATE_LIMIT_SECONDS=60`
- Per-route rules: set `API_RATE_LIMIT_RULES` to a JSON/YAML file path or inline YAML in `/etc/bunkerweb/api.yml`.

!!! warning "Startup safety"
    The API exits if there is no authentication path configured (no Biscuit keys, no admin user, and no `API_TOKEN`). Ensure at least one method is set before starting.

!!! info "Root path and proxies"
    If you deploy the API behind BunkerWeb on a sub-path, set `API_ROOT_PATH` to that path so `/docs` and relative routes work correctly when proxied.
## Operations

- Health: `GET /health` returns `{"status":"ok"}` when the service is up.
- Linux service: a `systemd` unit named `bunkerweb-api.service` is packaged. Customize via `/etc/bunkerweb/api.env` and manage with `systemctl`.
- Startup safety: the API fails fast when no authentication path is available (no Biscuit keys, no admin user, no `API_TOKEN`). Errors are written to `/var/tmp/bunkerweb/api.error`.
(File diff suppressed because one or more lines are too long)

(Image changed: Before: 980 KiB, After: 1.3 MiB)
@ -45,15 +45,73 @@ By default, the container exposes:
|
|||
- 8443/tcp for HTTPS
|
||||
- 8443/udp for QUIC
|
||||
- 7000/tcp for the web UI access without BunkerWeb in front (not recommended for production)
|
||||
- 8888/tcp for the API when `SERVICE_API=yes` (internal use; prefer exposing it through BunkerWeb as a reverse proxy rather than publishing directly)
|
||||
|
||||
The All-In-One image comes with several built-in services, which can be controlled using environment variables:
|
||||
|
||||
- `SERVICE_UI=yes` (default) - Enables the web UI service
|
||||
- `SERVICE_SCHEDULER=yes` (default) - Enables the Scheduler service
|
||||
- `SERVICE_API=no` (default) - Enables the API service (FastAPI control plane)
|
||||
- `AUTOCONF_MODE=no` (default) - Enables the autoconf service
|
||||
- `USE_REDIS=yes` (default) - Enables the built-in [Redis](#redis-integration) instance
|
||||
- `USE_CROWDSEC=no` (default) - [CrowdSec](#crowdsec-integration) integration is disabled by default
|
||||
|
||||
### API Integration
|
||||
|
||||
The All-In-One image embeds the BunkerWeb API. It is disabled by default and can be enabled by setting `SERVICE_API=yes`.
|
||||
|
||||
!!! warning "Security"
|
||||
The API is a privileged control plane. Do not expose it directly to the Internet. Keep it on an internal network, restrict source IPs with `API_WHITELIST_IPS`, require authentication (`API_TOKEN` or API users + Biscuit), and preferably access it through BunkerWeb as a reverse proxy on an unguessable path.
|
||||
|
||||
Quick enable (standalone) — publishes the API port; for testing only:

```bash
docker run -d \
  --name bunkerweb-aio \
  -v bw-storage:/data \
  -e SERVICE_API=yes \
  -e API_WHITELIST_IPS="127.0.0.0/8" \
  -e API_TOKEN="changeme" \
  -p 80:8080/tcp -p 443:8443/tcp -p 443:8443/udp \
  -p 8888:8888/tcp \
  bunkerity/bunkerweb-all-in-one:1.6.5-rc3
```
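Once the container is up, a quick probe confirms the published port answers. This is only a sketch: the `/ping` route and the Bearer scheme are assumptions to verify against the [API documentation](api.md).

```shell
API_URL="${API_URL:-http://127.0.0.1:8888}"
API_TOKEN="${API_TOKEN:-changeme}"

# -sf fails silently on HTTP errors, -m 5 bounds the wait when nothing listens
if curl -sf -m 5 -H "Authorization: Bearer ${API_TOKEN}" "${API_URL}/ping" >/dev/null 2>&1; then
  STATUS="reachable"
else
  STATUS="unreachable"
fi
echo "API is ${STATUS} at ${API_URL}"
```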

Recommended (behind BunkerWeb) — do not publish `8888`; reverse-proxy it instead:

```yaml
services:
  bunkerweb:
    image: bunkerity/bunkerweb:1.6.5-rc3
    ports:
      - "80:8080/tcp"
      - "443:8443/tcp"
      - "443:8443/udp"
    environment:
      SERVER_NAME: "www.example.com"
      MULTISITE: "yes"
      DISABLE_DEFAULT_SERVER: "yes"
      USE_REVERSE_PROXY: "yes"
      REVERSE_PROXY_URL: "/api-<unguessable>"
      REVERSE_PROXY_HOST: "http://bunkerweb-aio:8888"
    networks:
      - bw-universe

  bunkerweb-aio:
    image: bunkerity/bunkerweb-all-in-one:1.6.5-rc3
    environment:
      SERVICE_API: "yes"
      API_WHITELIST_IPS: "127.0.0.0/8 10.20.30.0/24"
      # Optionally set an admin override token
      # API_TOKEN: "changeme"
    networks:
      - bw-universe

networks:
  bw-universe:
    name: bw-universe
```

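The compose file depends on `REVERSE_PROXY_URL` being unguessable. One way to generate a high-entropy suffix (a sketch using coreutils only; any sufficiently long random string works):

```shell
# 16 random bytes, hex-encoded: a 32-character URL-safe suffix
SUFFIX="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
REVERSE_PROXY_URL="/api-${SUFFIX}"
echo "$REVERSE_PROXY_URL"
```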
Details about authentication, permissions (ACL), rate limiting, TLS, and configuration options are available in the [API documentation](api.md).

### Accessing the Setup wizard

By default, the setup wizard is launched automatically the first time you run the AIO container. To access it, follow these steps:

@@ -596,6 +654,7 @@ When run without any options, the script enters an interactive mode that guides

2. **Setup Wizard**: Choose whether to enable the web-based configuration wizard. This is highly recommended for first-time users.
3. **CrowdSec Integration**: Opt in to install the CrowdSec security engine for advanced, real-time threat protection.
4. **CrowdSec AppSec**: If you choose to install CrowdSec, you can also enable the Application Security (AppSec) component, which adds WAF capabilities.
5. **API Service**: Choose whether to enable the optional BunkerWeb API service. It is disabled by default on Linux installations.

!!! info "Manager and Scheduler installations"

    If you choose the **Manager** or **Scheduler Only** installation type, you will also be prompted to provide the IP addresses or hostnames of your BunkerWeb worker instances.

@@ -614,6 +673,8 @@ For non-interactive or automated setups, the script can be controlled with comma

| `-y, --yes` | Runs in non-interactive mode using default answers for all prompts. |
| `-f, --force` | Forces the installation to proceed even on an unsupported OS version. |
| `-q, --quiet` | Silent installation (suppresses output). |
| `--api`, `--enable-api` | Enables the API (FastAPI) systemd service (disabled by default). |
| `--no-api` | Explicitly disables the API service. |
| `-h, --help` | Displays the help message with all available options. |
| `--dry-run` | Shows what would be installed without actually installing it. |

@@ -668,6 +729,9 @@ sudo ./install-bunkerweb.sh --quiet --yes

# Preview installation without executing
sudo ./install-bunkerweb.sh --dry-run

# Enable the API during easy install (non-interactive)
sudo ./install-bunkerweb.sh --yes --api

# Error: CrowdSec cannot be used with worker installations
# sudo ./install-bunkerweb.sh --worker --crowdsec # This will fail

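The hash-pinned hunks that follow come from compiled requirements files. For context, such pins are typically regenerated from the matching `requirements.in` (assuming `pip-tools` provides `pip-compile`; the command is echoed rather than executed so the sketch is safe to run anywhere):

```shell
# pip-compile resolves requirements.in and emits --hash pins for every artifact
CMD="pip-compile --generate-hashes --output-file requirements.txt requirements.in"
echo "$CMD"
```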
@@ -1,5 +1,5 @@
mike==2.1.3
mkdocs-material[imaging]==9.6.18
mkdocs-material[imaging]==9.6.20
mkdocs-print-site-plugin==2.8
mkdocs-static-i18n==1.3.0
pytablewriter==1.2.1

@@ -29,74 +29,91 @@ certifi==2025.8.3 \
--hash=sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407 \
--hash=sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5
# via requests
cffi==1.17.1 \
|
||||
--hash=sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8 \
|
||||
--hash=sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2 \
|
||||
--hash=sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1 \
|
||||
--hash=sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15 \
|
||||
--hash=sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36 \
|
||||
--hash=sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824 \
|
||||
--hash=sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8 \
|
||||
--hash=sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36 \
|
||||
--hash=sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17 \
|
||||
--hash=sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf \
|
||||
--hash=sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc \
|
||||
--hash=sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3 \
|
||||
--hash=sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed \
|
||||
--hash=sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702 \
|
||||
--hash=sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1 \
|
||||
--hash=sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8 \
|
||||
--hash=sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903 \
|
||||
--hash=sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6 \
|
||||
--hash=sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d \
|
||||
--hash=sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b \
|
||||
--hash=sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e \
|
||||
--hash=sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be \
|
||||
--hash=sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c \
|
||||
--hash=sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683 \
|
||||
--hash=sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9 \
|
||||
--hash=sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c \
|
||||
--hash=sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8 \
|
||||
--hash=sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1 \
|
||||
--hash=sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4 \
|
||||
--hash=sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655 \
|
||||
--hash=sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67 \
|
||||
--hash=sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595 \
|
||||
--hash=sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0 \
|
||||
--hash=sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65 \
|
||||
--hash=sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41 \
|
||||
--hash=sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6 \
|
||||
--hash=sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401 \
|
||||
--hash=sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6 \
|
||||
--hash=sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3 \
|
||||
--hash=sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16 \
|
||||
--hash=sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93 \
|
||||
--hash=sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e \
|
||||
--hash=sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4 \
|
||||
--hash=sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964 \
|
||||
--hash=sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c \
|
||||
--hash=sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576 \
|
||||
--hash=sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0 \
|
||||
--hash=sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3 \
|
||||
--hash=sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662 \
|
||||
--hash=sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3 \
|
||||
--hash=sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff \
|
||||
--hash=sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5 \
|
||||
--hash=sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd \
|
||||
--hash=sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f \
|
||||
--hash=sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5 \
|
||||
--hash=sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14 \
|
||||
--hash=sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d \
|
||||
--hash=sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9 \
|
||||
--hash=sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7 \
|
||||
--hash=sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382 \
|
||||
--hash=sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a \
|
||||
--hash=sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e \
|
||||
--hash=sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a \
|
||||
--hash=sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4 \
|
||||
--hash=sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99 \
|
||||
--hash=sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87 \
|
||||
--hash=sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b
|
||||
cffi==2.0.0 \
|
||||
--hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
|
||||
--hash=sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b \
|
||||
--hash=sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f \
|
||||
--hash=sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9 \
|
||||
--hash=sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44 \
|
||||
--hash=sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2 \
|
||||
--hash=sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c \
|
||||
--hash=sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75 \
|
||||
--hash=sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65 \
|
||||
--hash=sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e \
|
||||
--hash=sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a \
|
||||
--hash=sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e \
|
||||
--hash=sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25 \
|
||||
--hash=sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a \
|
||||
--hash=sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe \
|
||||
--hash=sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b \
|
||||
--hash=sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91 \
|
||||
--hash=sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592 \
|
||||
--hash=sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187 \
|
||||
--hash=sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c \
|
||||
--hash=sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1 \
|
||||
--hash=sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94 \
|
||||
--hash=sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba \
|
||||
--hash=sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb \
|
||||
--hash=sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165 \
|
||||
--hash=sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529 \
|
||||
--hash=sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca \
|
||||
--hash=sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c \
|
||||
--hash=sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6 \
|
||||
--hash=sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c \
|
||||
--hash=sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0 \
|
||||
--hash=sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743 \
|
||||
--hash=sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63 \
|
||||
--hash=sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5 \
|
||||
--hash=sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5 \
|
||||
--hash=sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4 \
|
||||
--hash=sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d \
|
||||
--hash=sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b \
|
||||
--hash=sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93 \
|
||||
--hash=sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205 \
|
||||
--hash=sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27 \
|
||||
--hash=sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512 \
|
||||
--hash=sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d \
|
||||
--hash=sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c \
|
||||
--hash=sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037 \
|
||||
--hash=sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26 \
|
||||
--hash=sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322 \
|
||||
--hash=sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb \
|
||||
--hash=sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c \
|
||||
--hash=sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8 \
|
||||
--hash=sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4 \
|
||||
--hash=sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414 \
|
||||
--hash=sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9 \
|
||||
--hash=sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664 \
|
||||
--hash=sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9 \
|
||||
--hash=sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775 \
|
||||
--hash=sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739 \
|
||||
--hash=sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc \
|
||||
--hash=sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062 \
|
||||
--hash=sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe \
|
||||
--hash=sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9 \
|
||||
--hash=sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92 \
|
||||
--hash=sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5 \
|
||||
--hash=sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13 \
|
||||
--hash=sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d \
|
||||
--hash=sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26 \
|
||||
--hash=sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f \
|
||||
--hash=sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495 \
|
||||
--hash=sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b \
|
||||
--hash=sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6 \
|
||||
--hash=sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c \
|
||||
--hash=sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef \
|
||||
--hash=sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5 \
|
||||
--hash=sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18 \
|
||||
--hash=sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad \
|
||||
--hash=sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3 \
|
||||
--hash=sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7 \
|
||||
--hash=sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5 \
|
||||
--hash=sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534 \
|
||||
--hash=sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49 \
|
||||
--hash=sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2 \
|
||||
--hash=sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5 \
|
||||
--hash=sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453 \
|
||||
--hash=sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf
|
||||
# via cairocffi
|
||||
chardet==5.2.0 \
|
||||
--hash=sha256:1b3b6ff479a8c414bc3fa2c0852995695c4a026dcd6d0633b2dd092ca39c1cf7 \

@@ -330,9 +347,9 @@ mkdocs-get-deps==0.2.0 \
--hash=sha256:162b3d129c7fad9b19abfdcb9c1458a651628e4b1dea628ac68790fb3061c60c \
--hash=sha256:2bf11d0b133e77a0dd036abeeb06dec8775e46efa526dc70667d8863eefc6134
# via mkdocs
mkdocs-material==9.6.18 \
--hash=sha256:a2eb253bcc8b66f8c6eaf8379c10ed6e9644090c2e2e9d0971c7722dc7211c05 \
--hash=sha256:dbc1e146a0ecce951a4d84f97b816a54936cdc9e1edd1667fc6868878ac06701
mkdocs-material==9.6.20 \
--hash=sha256:b8d8c8b0444c7c06dd984b55ba456ce731f0035c5a1533cc86793618eb1e6c82 \
--hash=sha256:e1f84d21ec5fb730673c4259b2e0d39f8d32a3fef613e3a8e7094b012d43e790
# via
#   -r requirements.in
#   mkdocs-print-site-plugin

@@ -366,87 +383,113 @@ pathvalidate==3.3.1 \
|
|||
--hash=sha256:5263baab691f8e1af96092fa5137ee17df5bdfbd6cff1fcac4d6ef4bc2e1735f \
|
||||
--hash=sha256:b18c07212bfead624345bb8e1d6141cdcf15a39736994ea0b94035ad2b1ba177
|
||||
# via pytablewriter
|
||||
pillow==10.4.0 \
|
||||
--hash=sha256:02a2be69f9c9b8c1e97cf2713e789d4e398c751ecfd9967c18d0ce304efbf885 \
|
||||
--hash=sha256:030abdbe43ee02e0de642aee345efa443740aa4d828bfe8e2eb11922ea6a21ea \
|
||||
--hash=sha256:06b2f7898047ae93fad74467ec3d28fe84f7831370e3c258afa533f81ef7f3df \
|
||||
--hash=sha256:0755ffd4a0c6f267cccbae2e9903d95477ca2f77c4fcf3a3a09570001856c8a5 \
|
||||
--hash=sha256:0a9ec697746f268507404647e531e92889890a087e03681a3606d9b920fbee3c \
|
||||
--hash=sha256:0ae24a547e8b711ccaaf99c9ae3cd975470e1a30caa80a6aaee9a2f19c05701d \
|
||||
--hash=sha256:134ace6dc392116566980ee7436477d844520a26a4b1bd4053f6f47d096997fd \
|
||||
--hash=sha256:166c1cd4d24309b30d61f79f4a9114b7b2313d7450912277855ff5dfd7cd4a06 \
|
||||
--hash=sha256:1b5dea9831a90e9d0721ec417a80d4cbd7022093ac38a568db2dd78363b00908 \
|
||||
--hash=sha256:1d846aea995ad352d4bdcc847535bd56e0fd88d36829d2c90be880ef1ee4668a \
|
||||
--hash=sha256:1ef61f5dd14c300786318482456481463b9d6b91ebe5ef12f405afbba77ed0be \
|
||||
--hash=sha256:297e388da6e248c98bc4a02e018966af0c5f92dfacf5a5ca22fa01cb3179bca0 \
|
||||
--hash=sha256:298478fe4f77a4408895605f3482b6cc6222c018b2ce565c2b6b9c354ac3229b \
|
||||
--hash=sha256:29dbdc4207642ea6aad70fbde1a9338753d33fb23ed6956e706936706f52dd80 \
|
||||
--hash=sha256:2db98790afc70118bd0255c2eeb465e9767ecf1f3c25f9a1abb8ffc8cfd1fe0a \
|
||||
--hash=sha256:32cda9e3d601a52baccb2856b8ea1fc213c90b340c542dcef77140dfa3278a9e \
|
||||
--hash=sha256:37fb69d905be665f68f28a8bba3c6d3223c8efe1edf14cc4cfa06c241f8c81d9 \
|
||||
--hash=sha256:416d3a5d0e8cfe4f27f574362435bc9bae57f679a7158e0096ad2beb427b8696 \
|
||||
--hash=sha256:43efea75eb06b95d1631cb784aa40156177bf9dd5b4b03ff38979e048258bc6b \
|
||||
--hash=sha256:4b35b21b819ac1dbd1233317adeecd63495f6babf21b7b2512d244ff6c6ce309 \
|
||||
--hash=sha256:4d9667937cfa347525b319ae34375c37b9ee6b525440f3ef48542fcf66f2731e \
|
||||
--hash=sha256:5161eef006d335e46895297f642341111945e2c1c899eb406882a6c61a4357ab \
|
||||
--hash=sha256:543f3dc61c18dafb755773efc89aae60d06b6596a63914107f75459cf984164d \
|
||||
--hash=sha256:551d3fd6e9dc15e4c1eb6fc4ba2b39c0c7933fa113b220057a34f4bb3268a060 \
|
||||
--hash=sha256:59291fb29317122398786c2d44427bbd1a6d7ff54017075b22be9d21aa59bd8d \
|
||||
--hash=sha256:5b001114dd152cfd6b23befeb28d7aee43553e2402c9f159807bf55f33af8a8d \
|
||||
--hash=sha256:5b4815f2e65b30f5fbae9dfffa8636d992d49705723fe86a3661806e069352d4 \
|
||||
--hash=sha256:5dc6761a6efc781e6a1544206f22c80c3af4c8cf461206d46a1e6006e4429ff3 \
|
||||
--hash=sha256:5e84b6cc6a4a3d76c153a6b19270b3526a5a8ed6b09501d3af891daa2a9de7d6 \
|
||||
--hash=sha256:6209bb41dc692ddfee4942517c19ee81b86c864b626dbfca272ec0f7cff5d9fb \
|
||||
--hash=sha256:673655af3eadf4df6b5457033f086e90299fdd7a47983a13827acf7459c15d94 \
|
||||
--hash=sha256:6c762a5b0997f5659a5ef2266abc1d8851ad7749ad9a6a5506eb23d314e4f46b \
|
||||
--hash=sha256:7086cc1d5eebb91ad24ded9f58bec6c688e9f0ed7eb3dbbf1e4800280a896496 \
|
||||
--hash=sha256:73664fe514b34c8f02452ffb73b7a92c6774e39a647087f83d67f010eb9a0cf0 \
|
||||
--hash=sha256:76a911dfe51a36041f2e756b00f96ed84677cdeb75d25c767f296c1c1eda1319 \
|
||||
--hash=sha256:780c072c2e11c9b2c7ca37f9a2ee8ba66f44367ac3e5c7832afcfe5104fd6d1b \
|
||||
--hash=sha256:7928ecbf1ece13956b95d9cbcfc77137652b02763ba384d9ab508099a2eca856 \
|
||||
--hash=sha256:7970285ab628a3779aecc35823296a7869f889b8329c16ad5a71e4901a3dc4ef \
|
||||
--hash=sha256:7a8d4bade9952ea9a77d0c3e49cbd8b2890a399422258a77f357b9cc9be8d680 \
|
||||
--hash=sha256:7c1ee6f42250df403c5f103cbd2768a28fe1a0ea1f0f03fe151c8741e1469c8b \
|
||||
--hash=sha256:7dfecdbad5c301d7b5bde160150b4db4c659cee2b69589705b6f8a0c509d9f42 \
|
||||
--hash=sha256:812f7342b0eee081eaec84d91423d1b4650bb9828eb53d8511bcef8ce5aecf1e \
|
||||
--hash=sha256:866b6942a92f56300012f5fbac71f2d610312ee65e22f1aa2609e491284e5597 \
|
||||
--hash=sha256:86dcb5a1eb778d8b25659d5e4341269e8590ad6b4e8b44d9f4b07f8d136c414a \
|
||||
--hash=sha256:87dd88ded2e6d74d31e1e0a99a726a6765cda32d00ba72dc37f0651f306daaa8 \
|
||||
--hash=sha256:8bc1a764ed8c957a2e9cacf97c8b2b053b70307cf2996aafd70e91a082e70df3 \
|
||||
--hash=sha256:8d4d5063501b6dd4024b8ac2f04962d661222d120381272deea52e3fc52d3736 \
|
||||
--hash=sha256:8f0aef4ef59694b12cadee839e2ba6afeab89c0f39a3adc02ed51d109117b8da \
|
||||
--hash=sha256:930044bb7679ab003b14023138b50181899da3f25de50e9dbee23b61b4de2126 \
|
||||
--hash=sha256:950be4d8ba92aca4b2bb0741285a46bfae3ca699ef913ec8416c1b78eadd64cd \
|
||||
--hash=sha256:961a7293b2457b405967af9c77dcaa43cc1a8cd50d23c532e62d48ab6cdd56f5 \
|
||||
--hash=sha256:9b885f89040bb8c4a1573566bbb2f44f5c505ef6e74cec7ab9068c900047f04b \
|
||||
--hash=sha256:9f4727572e2918acaa9077c919cbbeb73bd2b3ebcfe033b72f858fc9fbef0026 \
|
||||
--hash=sha256:a02364621fe369e06200d4a16558e056fe2805d3468350df3aef21e00d26214b \
|
||||
--hash=sha256:a985e028fc183bf12a77a8bbf36318db4238a3ded7fa9df1b9a133f1cb79f8fc \
|
||||
--hash=sha256:ac1452d2fbe4978c2eec89fb5a23b8387aba707ac72810d9490118817d9c0b46 \
|
||||
--hash=sha256:b15e02e9bb4c21e39876698abf233c8c579127986f8207200bc8a8f6bb27acf2 \
|
||||
--hash=sha256:b2724fdb354a868ddf9a880cb84d102da914e99119211ef7ecbdc613b8c96b3c \
|
||||
--hash=sha256:bbc527b519bd3aa9d7f429d152fea69f9ad37c95f0b02aebddff592688998abe \
|
||||
--hash=sha256:bcd5e41a859bf2e84fdc42f4edb7d9aba0a13d29a2abadccafad99de3feff984 \
|
||||
--hash=sha256:bd2880a07482090a3bcb01f4265f1936a903d70bc740bfcb1fd4e8a2ffe5cf5a \
|
||||
--hash=sha256:bee197b30783295d2eb680b311af15a20a8b24024a19c3a26431ff83eb8d1f70 \
|
||||
--hash=sha256:bf2342ac639c4cf38799a44950bbc2dfcb685f052b9e262f446482afaf4bffca \
|
||||
--hash=sha256:c76e5786951e72ed3686e122d14c5d7012f16c8303a674d18cdcd6d89557fc5b \
|
||||
--hash=sha256:cbed61494057c0f83b83eb3a310f0bf774b09513307c434d4366ed64f4128a91 \
|
||||
--hash=sha256:cfdd747216947628af7b259d274771d84db2268ca062dd5faf373639d00113a3 \
|
||||
--hash=sha256:d7480af14364494365e89d6fddc510a13e5a2c3584cb19ef65415ca57252fb84 \
|
||||
--hash=sha256:dbc6ae66518ab3c5847659e9988c3b60dc94ffb48ef9168656e0019a93dbf8a1 \
|
||||
--hash=sha256:dc3e2db6ba09ffd7d02ae9141cfa0ae23393ee7687248d46a7507b75d610f4f5 \
|
||||
--hash=sha256:dfe91cb65544a1321e631e696759491ae04a2ea11d36715eca01ce07284738be \
|
||||
--hash=sha256:e4d49b85c4348ea0b31ea63bc75a9f3857869174e2bf17e7aba02945cd218e6f \
|
||||
--hash=sha256:e4db64794ccdf6cb83a59d73405f63adbe2a1887012e308828596100a0b2f6cc \
|
||||
--hash=sha256:e553cad5179a66ba15bb18b353a19020e73a7921296a7979c4a2b7f6a5cd57f9 \
|
||||
--hash=sha256:e88d5e6ad0d026fba7bdab8c3f225a69f063f116462c49892b0149e21b6c0a0e \
|
||||
--hash=sha256:ecd85a8d3e79cd7158dec1c9e5808e821feea088e2f69a974db5edf84dc53141 \
|
||||
--hash=sha256:f5b92f4d70791b4a67157321c4e8225d60b119c5cc9aee8ecf153aace4aad4ef \
|
||||
--hash=sha256:f5f0c3e969c8f12dd2bb7e0b15d5c468b51e5017e01e2e867335c81903046a22 \
|
||||
--hash=sha256:f7baece4ce06bade126fb84b8af1c33439a76d8a6fd818970215e0560ca28c27 \
|
||||
--hash=sha256:ff25afb18123cea58a591ea0244b92eb1e61a1fd497bf6d6384f09bc3262ec3e \
|
||||
--hash=sha256:ff337c552345e95702c5fde3158acb0625111017d0e5f24bf3acdb9cc16b90d1
|
||||
pillow==11.3.0 \
|
||||
--hash=sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2 \
|
||||
--hash=sha256:02a723e6bf909e7cea0dac1b0e0310be9d7650cd66222a5f1c571455c0a45214 \
|
||||
--hash=sha256:040a5b691b0713e1f6cbe222e0f4f74cd233421e105850ae3b3c0ceda520f42e \
|
||||
--hash=sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59 \
|
||||
--hash=sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50 \
|
||||
--hash=sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632 \
|
||||
--hash=sha256:092c80c76635f5ecb10f3f83d76716165c96f5229addbd1ec2bdbbda7d496e06 \
|
||||
--hash=sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a \
|
||||
--hash=sha256:0bce5c4fd0921f99d2e858dc4d4d64193407e1b99478bc5cacecba2311abde51 \
|
||||
--hash=sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced \
|
||||
--hash=sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f \
|
||||
--hash=sha256:118ca10c0d60b06d006be10a501fd6bbdfef559251ed31b794668ed569c87e12 \
|
||||
--hash=sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8 \
|
||||
--hash=sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6 \
|
||||
--hash=sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580 \
|
||||
--hash=sha256:19d2ff547c75b8e3ff46f4d9ef969a06c30ab2d4263a9e287733aa8b2429ce8f \
|
||||
--hash=sha256:1a992e86b0dd7aeb1f053cd506508c0999d710a8f07b4c791c63843fc6a807ac \
|
||||
--hash=sha256:1b9c17fd4ace828b3003dfd1e30bff24863e0eb59b535e8f80194d9cc7ecf860 \
|
||||
--hash=sha256:1c627742b539bba4309df89171356fcb3cc5a9178355b2727d1b74a6cf155fbd \
|
||||
--hash=sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722 \
|
||||
--hash=sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8 \
|
||||
--hash=sha256:23cff760a9049c502721bdb743a7cb3e03365fafcdfc2ef9784610714166e5a4 \
|
||||
--hash=sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673 \
|
||||
--hash=sha256:2a3117c06b8fb646639dce83694f2f9eac405472713fcb1ae887469c0d4f6788 \
|
||||
--hash=sha256:2aceea54f957dd4448264f9bf40875da0415c83eb85f55069d89c0ed436e3542 \
|
||||
--hash=sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e \
|
||||
--hash=sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd \
|
||||
--hash=sha256:30b7c02f3899d10f13d7a48163c8969e4e653f8b43416d23d13d1bbfdc93b9f8 \
|
||||
--hash=sha256:3828ee7586cd0b2091b6209e5ad53e20d0649bbe87164a459d0676e035e8f523 \
|
||||
--hash=sha256:3cee80663f29e3843b68199b9d6f4f54bd1d4a6b59bdd91bceefc51238bcb967 \
|
||||
--hash=sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809 \
|
||||
--hash=sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477 \
|
||||
--hash=sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027 \
|
||||
--hash=sha256:4445fa62e15936a028672fd48c4c11a66d641d2c05726c7ec1f8ba6a572036ae \
|
||||
--hash=sha256:45dfc51ac5975b938e9809451c51734124e73b04d0f0ac621649821a63852e7b \
|
||||
--hash=sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c \
|
||||
--hash=sha256:48d254f8a4c776de343051023eb61ffe818299eeac478da55227d96e241de53f \
|
||||
--hash=sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e \
|
||||
--hash=sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b \
|
||||
--hash=sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7 \
|
||||
--hash=sha256:527b37216b6ac3a12d7838dc3bd75208ec57c1c6d11ef01902266a5a0c14fc27 \
|
||||
--hash=sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361 \
|
||||
--hash=sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae \
|
||||
--hash=sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d \
|
||||
--hash=sha256:6359a3bc43f57d5b375d1ad54a0074318a0844d11b76abccf478c37c986d3cfc \
|
||||
--hash=sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58 \
|
||||
--hash=sha256:65dc69160114cdd0ca0f35cb434633c75e8e7fad4cf855177a05bf38678f73ad \
|
||||
--hash=sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6 \
|
||||
--hash=sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024 \
|
||||
    --hash=sha256:6a418691000f2a418c9135a7cf0d797c1bb7d9a485e61fe8e7722845b95ef978 \
    --hash=sha256:6abdbfd3aea42be05702a8dd98832329c167ee84400a1d1f61ab11437f1717eb \
    --hash=sha256:6be31e3fc9a621e071bc17bb7de63b85cbe0bfae91bb0363c893cbe67247780d \
    --hash=sha256:7107195ddc914f656c7fc8e4a5e1c25f32e9236ea3ea860f257b0436011fddd0 \
    --hash=sha256:71f511f6b3b91dd543282477be45a033e4845a40278fa8dcdbfdb07109bf18f9 \
    --hash=sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f \
    --hash=sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874 \
    --hash=sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa \
    --hash=sha256:7aee118e30a4cf54fdd873bd3a29de51e29105ab11f9aad8c32123f58c8f8081 \
    --hash=sha256:7b161756381f0918e05e7cb8a371fff367e807770f8fe92ecb20d905d0e1c149 \
    --hash=sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6 \
    --hash=sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d \
    --hash=sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd \
    --hash=sha256:819931d25e57b513242859ce1876c58c59dc31587847bf74cfe06b2e0cb22d2f \
    --hash=sha256:83e1b0161c9d148125083a35c1c5a89db5b7054834fd4387499e06552035236c \
    --hash=sha256:857844335c95bea93fb39e0fa2726b4d9d758850b34075a7e3ff4f4fa3aa3b31 \
    --hash=sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e \
    --hash=sha256:8924748b688aa210d79883357d102cd64690e56b923a186f35a82cbc10f997db \
    --hash=sha256:89bd777bc6624fe4115e9fac3352c79ed60f3bb18651420635f26e643e3dd1f6 \
    --hash=sha256:8dc70ca24c110503e16918a658b869019126ecfe03109b754c402daff12b3d9f \
    --hash=sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494 \
    --hash=sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69 \
    --hash=sha256:932c754c2d51ad2b2271fd01c3d121daaa35e27efae2a616f77bf164bc0b3e94 \
    --hash=sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77 \
    --hash=sha256:97afb3a00b65cc0804d1c7abddbf090a81eaac02768af58cbdcaaa0a931e0b6d \
    --hash=sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7 \
    --hash=sha256:98a9afa7b9007c67ed84c57c9e0ad86a6000da96eaa638e4f8abe5b65ff83f0a \
    --hash=sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438 \
    --hash=sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288 \
    --hash=sha256:a1bc6ba083b145187f648b667e05a2534ecc4b9f2784c2cbe3089e44868f2b9b \
    --hash=sha256:a418486160228f64dd9e9efcd132679b7a02a5f22c982c78b6fc7dab3fefb635 \
    --hash=sha256:a4d336baed65d50d37b88ca5b60c0fa9d81e3a87d4a7930d3880d1624d5b31f3 \
    --hash=sha256:a6444696fce635783440b7f7a9fc24b3ad10a9ea3f0ab66c5905be1c19ccf17d \
    --hash=sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe \
    --hash=sha256:b4b8f3efc8d530a1544e5962bd6b403d5f7fe8b9e08227c6b255f98ad82b4ba0 \
    --hash=sha256:b5f56c3f344f2ccaf0dd875d3e180f631dc60a51b314295a3e681fe8cf851fbe \
    --hash=sha256:be5463ac478b623b9dd3937afd7fb7ab3d79dd290a28e2b6df292dc75063eb8a \
    --hash=sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805 \
    --hash=sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8 \
    --hash=sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36 \
    --hash=sha256:cadc9e0ea0a2431124cde7e1697106471fc4c1da01530e679b2391c37d3fbb3a \
    --hash=sha256:cc3e831b563b3114baac7ec2ee86819eb03caa1a2cef0b481a5675b59c4fe23b \
    --hash=sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e \
    --hash=sha256:d000f46e2917c705e9fb93a3606ee4a819d1e3aa7a9b442f6444f07e77cf5e25 \
    --hash=sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12 \
    --hash=sha256:e5c5858ad8ec655450a7c7df532e9842cf8df7cc349df7225c60d5d348c8aada \
    --hash=sha256:e67d793d180c9df62f1f40aee3accca4829d3794c95098887edc18af4b8b780c \
    --hash=sha256:ea944117a7974ae78059fcc1800e5d3295172bb97035c0c1d9345fca1419da71 \
    --hash=sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d \
    --hash=sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c \
    --hash=sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6 \
    --hash=sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1 \
    --hash=sha256:f1f182ebd2303acf8c380a54f615ec883322593320a9b00438eb842c1f37ae50 \
    --hash=sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653 \
    --hash=sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c \
    --hash=sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4 \
    --hash=sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3
    # via
    #   cairosvg
    #   mkdocs-material
@@ -454,9 +497,9 @@ platformdirs==4.4.0 \
    --hash=sha256:abd01743f24e5287cd7a5db3752faf1a2d65353f38ec26d98e25a6db65958c85 \
    --hash=sha256:ca753cf4d81dc309bc67b0ea38fd15dc97bc30ce419a7f58d13eb3bf14c4febf
    # via mkdocs-get-deps
-pycparser==2.22 \
-    --hash=sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6 \
-    --hash=sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc
+pycparser==2.23 \
+    --hash=sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2 \
+    --hash=sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934
    # via cffi
pygments==2.19.2 \
    --hash=sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887 \
@@ -466,9 +509,9 @@ pymdown-extensions==10.16.1 \
    --hash=sha256:aace82bcccba3efc03e25d584e6a22d27a8e17caa3f4dd9f207e49b787aa9a91 \
    --hash=sha256:d6ba157a6c03146a7fb122b2b9a121300056384eafeec9c9f9e584adfdb2a32d
    # via mkdocs-material
-pyparsing==3.2.3 \
-    --hash=sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf \
-    --hash=sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be
+pyparsing==3.2.4 \
+    --hash=sha256:91d0fcde680d42cd031daf3a6ba20da3107e08a75de50da58360e7d94ab24d36 \
+    --hash=sha256:fff89494f45559d0f2ce46613b419f632bbb6afbdaed49696d322bcf98a58e99
    # via mike
pytablewriter==1.2.1 \
    --hash=sha256:7bd0f4f397e070e3b8a34edcf1b9257ccbb18305493d8350a5dbc9957fced959 \

@@ -531,8 +531,8 @@ The web UI can be deployed and configured without going through the setup wizard
- `ADMIN_PASSWORD`: password to access the web UI.
- `FLASK_SECRET`: a secret key used to encrypt the session cookie (if not set, a random key will be generated).
- `TOTP_ENCRYPTION_KEYS` (or `TOTP_SECRETS`): a list of TOTP encryption keys separated by spaces or a dictionary (e.g.: `{"1": "mysecretkey"}` or `mysecretkey` or `mysecretkey mysecretkey1`). **We strongly recommend setting this variable if you want to use 2FA, as it will be used to encrypt the TOTP secret keys** (if not set, a random number of secret keys will be generated). Check out the [passlib documentation](https://passlib.readthedocs.io/en/stable/narr/totp-tutorial.html#application-secrets) for more information.
-- `LISTEN_ADDR`: the address where the web UI will listen (default is `0.0.0.0` in **Docker images** and `127.0.0.1` on **Linux installations**).
-- `LISTEN_PORT`: the port where the web UI will listen (default is `7000`).
+- `UI_LISTEN_ADDR` (preferred): the address where the web UI will listen (default is `0.0.0.0` in **Docker images** and `127.0.0.1` on **Linux installations**). Falls back to `LISTEN_ADDR` if not set.
+- `UI_LISTEN_PORT` (preferred): the port where the web UI will listen (default is `7000`). Falls back to `LISTEN_PORT` if not set.
- `MAX_WORKERS`: the number of workers used by the web UI (default is the number of CPUs).
- `MAX_THREADS`: the number of threads used by the web UI (default is `MAX_WORKERS` * 2).
- `FORWARDED_ALLOW_IPS`: a list of IP addresses or networks that are allowed to be used in the `X-Forwarded-For` header (default is `*` in **Docker images** and `127.0.0.1` on **Linux installations**).

misc/dev/docker-compose.api.misc.yml (new file, 99 lines)
@@ -0,0 +1,99 @@
services:
  bunkerweb:
    build:
      context: ../..
      dockerfile: ./src/bw/Dockerfile
      args:
        SKIP_MINIFY_HTML: "yes"
    ports:
      - 80:8080/tcp
      - 443:8443/tcp
      - 443:8443/udp
    environment:
      API_WHITELIST_IP: "127.0.0.0/24 10.20.30.0/24"
    restart: "unless-stopped"
    networks:
      bw-universe:
        aliases:
          - bunkerweb
      bw-services:
        aliases:
          - bunkerweb

  bw-scheduler:
    build:
      context: ../..
      dockerfile: ./src/scheduler/Dockerfile
    depends_on:
      - bunkerweb
    volumes:
      - bw-storage:/data
      - ./configs/server-http/hello.conf:/data/configs/server-http/hello.conf:ro
    environment:
      BUNKERWEB_INSTANCES: "bunkerweb"
      SERVER_NAME: "app1.example.com"
      LOG_LEVEL: "info"
      CUSTOM_LOG_LEVEL: "debug"
      MULTISITE: "yes"
      API_WHITELIST_IP: "127.0.0.0/24 10.20.30.0/24"
      USE_BUNKERNET: "no"
      USE_BLACKLIST: "no"
      USE_WHITELIST: "no"
      SEND_ANONYMOUS_REPORT: "no"
      SERVE_FILES: "no"
      DISABLE_DEFAULT_SERVER: "yes"
      USE_CLIENT_CACHE: "yes"
      USE_GZIP: "yes"
      SESSIONS_CHECK_IP: "no"
      EXTERNAL_PLUGIN_URLS: "https://github.com/bunkerity/bunkerweb-plugins/archive/refs/heads/dev.zip"
      CUSTOM_CONF_MODSEC_CRS_reqbody-suppress: "SecRuleRemoveById 200002"
      app1.example.com_USE_REVERSE_PROXY: "yes"
      app1.example.com_REVERSE_PROXY_URL: "/"
      app1.example.com_REVERSE_PROXY_HOST: "http://app1:8080"
    restart: "unless-stopped"
    networks:
      bw-universe:
        aliases:
          - bw-scheduler

  bw-api:
    build:
      context: ../..
      dockerfile: ./src/api/Dockerfile
    ports:
      - 8888:8888
    volumes:
      - bw-storage:/data
      - ../../src/api/app:/usr/share/bunkerweb/api/app:ro
      - ../../src/api/utils:/usr/share/bunkerweb/api/utils:ro
    environment:
      API_USERNAME: "admin"
      API_PASSWORD: "P@ssw0rd"
      CUSTOM_LOG_LEVEL: "debug"
      DEBUG: "1"
    restart: "unless-stopped"
    networks:
      bw-universe:
        aliases:
          - bw-api

  app1:
    image: nginxdemos/nginx-hello
    restart: "unless-stopped"
    networks:
      bw-services:
        aliases:
          - app1

volumes:
  bw-storage:

networks:
  bw-universe:
    name: bw-universe
    ipam:
      driver: default
      config:
        - subnet: 10.20.30.0/24
  bw-services:
    name: bw-services

misc/dev/docker-compose.api.yml (new file, 95 lines)
@@ -0,0 +1,95 @@
services:
  bunkerweb:
    build:
      context: ../..
      dockerfile: ./src/bw/Dockerfile
      args:
        SKIP_MINIFY_HTML: "yes"
    ports:
      - 80:8080/tcp
      - 443:8443/tcp
      - 443:8443/udp
    environment:
      API_WHITELIST_IP: "127.0.0.0/24 10.20.30.0/24"
    restart: "unless-stopped"
    networks:
      bw-universe:
        aliases:
          - bunkerweb
      bw-services:
        aliases:
          - bunkerweb

  bw-scheduler:
    build:
      context: ../..
      dockerfile: ./src/scheduler/Dockerfile
    depends_on:
      - bunkerweb
    volumes:
      - bw-storage:/data
    environment:
      BUNKERWEB_INSTANCES: "bunkerweb"
      SERVER_NAME: "app1.example.com"
      MULTISITE: "yes"
      API_WHITELIST_IP: "127.0.0.0/24 10.20.30.0/24"
      USE_BUNKERNET: "no"
      USE_BLACKLIST: "no"
      USE_WHITELIST: "no"
      SEND_ANONYMOUS_REPORT: "no"
      CUSTOM_LOG_LEVEL: "debug"
      LOG_LEVEL: "info"
      SERVE_FILES: "no"
      DISABLE_DEFAULT_SERVER: "yes"
      USE_CLIENT_CACHE: "yes"
      USE_GZIP: "yes"
      USE_REVERSE_PROXY: "yes"
      REVERSE_PROXY_URL: "/"
      REVERSE_PROXY_HOST: "http://app1:8080"
    restart: "unless-stopped"
    networks:
      bw-universe:
        aliases:
          - bw-scheduler

  bw-api:
    build:
      context: ../..
      dockerfile: ./src/api/Dockerfile
    ports:
      - 8888:8888
    volumes:
      - bw-storage:/data
      - ../../src/api/app:/usr/share/bunkerweb/api/app:ro
      - ../../src/api/utils:/usr/share/bunkerweb/api/utils:ro
    environment:
      API_USERNAME: "admin"
      API_PASSWORD: "P@ssw0rd"
      CUSTOM_LOG_LEVEL: "debug"
      DEBUG: "1"
    restart: "unless-stopped"
    networks:
      bw-universe:
        aliases:
          - bw-api

  app1:
    image: nginxdemos/nginx-hello
    restart: "unless-stopped"
    networks:
      bw-services:
        aliases:
          - app1

volumes:
  bw-storage:

networks:
  bw-universe:
    name: bw-universe
    ipam:
      driver: default
      config:
        - subnet: 10.20.30.0/24
  bw-services:
    name: bw-services

@@ -213,6 +213,27 @@ ask_user_preferences() {
        CROWDSEC_INSTALL="no"
    fi

    # Ask about API service enablement
    if [ -z "$SERVICE_API" ]; then
        echo
        echo -e "${BLUE}========================================${NC}"
        echo -e "${BLUE}🧩 BunkerWeb API Service${NC}"
        echo -e "${BLUE}========================================${NC}"
        echo "The BunkerWeb API provides a programmatic interface (FastAPI) to manage instances,"
        echo "perform actions (reload/stop), and integrate with external systems."
        echo "It is optional and disabled by default on Linux installations."
        echo
        while true; do
            echo -e "${YELLOW}Enable the API service? (y/N):${NC} "
            read -p "" -r
            case $REPLY in
                [Yy]*) SERVICE_API=yes; break ;;
                ""|[Nn]*) SERVICE_API=no; break ;;
                *) echo "Please answer yes (y) or no (n)." ;;
            esac
        done
    fi

    # Ask about AppSec installation if CrowdSec is chosen
    if [ "$CROWDSEC_INSTALL" = "yes" ]; then
        echo
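The installer's prompt above defaults to "no" on a plain Enter because the empty string is grouped with the `[Nn]*` branch. The pattern can be checked in isolation (a minimal sketch, independent of the installer; the `classify` helper is hypothetical):

```shell
# Reproduce the reply handling used by the API-service prompt.
classify() {
  case "$1" in
    [Yy]*) echo yes ;;
    ""|[Nn]*) echo no ;;
    *) echo invalid ;;
  esac
}
classify "y"      # yes
classify ""       # no (plain Enter takes the default)
classify "Nope"   # no
classify "maybe"  # invalid, so the installer re-asks
```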
@@ -619,6 +640,10 @@ show_final_info() {
    echo "Services status:"
    systemctl status bunkerweb --no-pager -l || true
    systemctl status bunkerweb-scheduler --no-pager -l || true
    # Show API service status if present on this system
    if systemctl list-units --type=service --all | grep -q '^bunkerweb-api.service'; then
        systemctl status bunkerweb-api --no-pager -l || true
    fi

    if [ "$ENABLE_WIZARD" = "yes" ]; then
        systemctl status bunkerweb-ui --no-pager -l || true
@@ -633,6 +658,9 @@ show_final_info() {
    fi

    echo " - Scheduler config: /etc/bunkerweb/scheduler.env"
    if [ "${SERVICE_API:-no}" = "yes" ] || systemctl list-units --type=service --all | grep -q '^bunkerweb-api.service'; then
        echo " - API config: /etc/bunkerweb/api.env"
    fi
    echo " - Logs: /var/log/bunkerweb/"
    echo
@@ -685,6 +713,8 @@ usage() {
    echo " -w, --enable-wizard Enable the setup wizard (default in interactive mode)"
    echo " -n, --no-wizard Disable the setup wizard"
    echo " -y, --yes Non-interactive mode, use defaults"
    echo " --api, --enable-api Enable the API service (disabled by default on Linux)"
    echo " --no-api Explicitly disable the API service"
    echo " -f, --force Force installation on unsupported OS versions"
    echo " -q, --quiet Silent installation (suppress output)"
    echo " -h, --help Show this help message"
@@ -793,6 +823,14 @@ while [[ $# -gt 0 ]]; do
            BUNKERWEB_INSTANCES_INPUT="$2"
            shift 2
            ;;
        --api|--enable-api)
            SERVICE_API=yes
            shift
            ;;
        --no-api)
            SERVICE_API=no
            shift
            ;;
        --backup-dir)
            BACKUP_DIRECTORY="$2"; shift 2 ;;
        --no-auto-backup)
@@ -1019,6 +1057,11 @@ main() {
        echo "BUNKERWEB_INSTANCES=$BUNKERWEB_INSTANCES_INPUT" > /var/tmp/bunkerweb_instances.env
    fi

    # Persist API enablement for postinstall if chosen
    if [ "${SERVICE_API:-no}" = "yes" ]; then
        touch /var/tmp/bunkerweb_enable_api
    fi

    # Set environment variables based on installation type
    case "$INSTALL_TYPE" in
        "manager")

@@ -1,4 +1,4 @@
-FROM python:3.10-alpine@sha256:8d21601f9f531162bc0c37ae0ac9e7a070e512a6ae0ffc4090118866902c9caa
+FROM python:3.10-alpine@sha256:24cab748bf7bd8e3d2f9bb4e5771f17b628417527a4e1f2c59c370c2a8a27f1c

# Install python dependencies
RUN apk add --no-cache --virtual .build-deps \

@@ -71,6 +71,7 @@ COPY src/deps/requirements.txt /tmp/requirements-deps.txt
COPY src/autoconf/requirements.txt /tmp/req/requirements-autoconf.txt
COPY src/scheduler/requirements.txt /tmp/req/requirements-scheduler.txt
COPY src/ui/requirements.txt /tmp/req/requirements-ui.txt
+COPY src/api/requirements.txt /tmp/req/requirements-api.txt
COPY src/common/gen/requirements.txt /tmp/req/requirements-gen.txt
COPY src/common/db/requirements.txt /tmp/req/requirements-db.txt
COPY src/common/db/requirements.armv7.txt /tmp/req/requirements-db.armv7.txt
@@ -98,6 +99,7 @@ COPY src/common/utils utils
COPY src/autoconf autoconf
COPY src/scheduler scheduler
COPY src/ui ui
+COPY src/api api
COPY src/all-in-one all-in-one
COPY src/VERSION VERSION
COPY misc/*.ascii misc/
@@ -179,8 +181,8 @@ RUN cp helpers/bwcli /usr/bin/ && \
    chown -R root:nginx INTEGRATION /data /etc/nginx /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb /usr/bin/bwcli && \
    chmod -R 770 /data /etc/nginx /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/log/bunkerweb /var/run/bunkerweb && \
    chmod 2770 /var/tmp/bunkerweb && \
-    find . \( -path './all-in-one' -o -path './scheduler' -o -path './ui' -o -path './autoconf' -o -path './cli' -o -path './lua' -o -path './core' -o -path './db' -o -path './gen' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
-    find all-in-one scheduler ui autoconf cli lua core db gen helpers deps -type f ! -path 'deps/bin/*' ! -path 'deps/python/bin/*' ! -name '*.lua' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+    find . \( -path './all-in-one' -o -path './api' -o -path './scheduler' -o -path './ui' -o -path './autoconf' -o -path './cli' -o -path './lua' -o -path './core' -o -path './db' -o -path './gen' -o -path './utils' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+    find all-in-one api scheduler ui autoconf cli lua core db gen utils helpers deps -type f ! -path 'deps/bin/*' ! -path 'deps/python/bin/*' ! -name '*.lua' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
    chmod 770 -R db/alembic && \
    chmod 550 entrypoint.sh && \
    chmod 660 INTEGRATION && \
@@ -227,13 +229,14 @@ LABEL bunkerweb.INSTANCE="bunkerweb"

VOLUME /data

-EXPOSE 8080/tcp 8443/tcp 8443/udp 7000/tcp
+EXPOSE 8080/tcp 8443/tcp 8443/udp 7000/tcp 8888/tcp

HEALTHCHECK --interval=10s --timeout=10s --start-period=30s --retries=6 CMD /usr/share/bunkerweb/helpers/healthcheck-all-in-one.sh

ENV PYTHONPATH="/usr/share/bunkerweb/deps/python:/usr/share/bunkerweb/db"
ENV SERVICE_UI="yes"
ENV SERVICE_SCHEDULER="yes"
+ENV SERVICE_API="no"
ENV AUTOCONF_MODE="no"
ENV USE_CROWDSEC="no"
ENV USE_REDIS="yes"

@@ -172,6 +172,17 @@ else
    log "ENTRYPOINT" "ℹ️" "UI service is disabled, autostart not enabled"
fi

# Enable autorestart for API service if enabled
if [ "${SERVICE_API}" = "yes" ]; then
    export API_LISTEN_ADDR="${API_LISTEN_ADDR:-${LISTEN_ADDR:-0.0.0.0}}"
    export API_LISTEN_PORT="${API_LISTEN_PORT:-${LISTEN_PORT:-8888}}"
    sed -i 's/autorestart=false/autorestart=true/' /etc/supervisor.d/api.ini
    log "ENTRYPOINT" "✅" "Enabled autorestart for API service"
else
    sed -i 's/autostart=true/autostart=false/' /etc/supervisor.d/api.ini
    log "ENTRYPOINT" "ℹ️" "API service is disabled, autostart not enabled"
fi

# Enable autorestart for scheduler service if enabled
if [ "${SERVICE_SCHEDULER}" = "yes" ]; then
    sed -i 's/autorestart=false/autorestart=true/' /etc/supervisor.d/scheduler.ini
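The entrypoint's `API_LISTEN_ADDR`/`API_LISTEN_PORT` exports rely on nested `${VAR:-default}` expansion, so an explicit `API_*` value wins, a shared legacy `LISTEN_*` value comes next, and the hard-coded default comes last. The behavior can be checked in isolation (addresses here are arbitrary examples):

```shell
# Nested default expansion, as used for API_LISTEN_ADDR in the entrypoint.
unset API_LISTEN_ADDR LISTEN_ADDR
echo "${API_LISTEN_ADDR:-${LISTEN_ADDR:-0.0.0.0}}"   # both unset -> 0.0.0.0
LISTEN_ADDR="127.0.0.1"
echo "${API_LISTEN_ADDR:-${LISTEN_ADDR:-0.0.0.0}}"   # legacy fallback -> 127.0.0.1
API_LISTEN_ADDR="10.0.0.5"
echo "${API_LISTEN_ADDR:-${LISTEN_ADDR:-0.0.0.0}}"   # explicit API_* wins -> 10.0.0.5
```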

src/all-in-one/supervisor.d/api.ini (new file, 22 lines)
@@ -0,0 +1,22 @@
[program:api]
command=sh -c 'if [ "${SERVICE_API}" = "yes" ]; then exec /usr/share/bunkerweb/api/entrypoint.sh 2>&1 | sed "s/^/[API] /" | tee /var/log/bunkerweb/api.log; else echo "[API] Disabled by SERVICE_API setting" && exit 0; fi'
directory=/usr/share/bunkerweb/api
user=nginx
group=nginx
stdout_logfile=/proc/1/fd/1
stdout_logfile_maxbytes=0
stderr_logfile=/proc/1/fd/2
stderr_logfile_maxbytes=0
environment=PYTHONPATH="/usr/share/bunkerweb/deps/python",LISTEN_ADDR="%(ENV_API_LISTEN_ADDR)s",LISTEN_PORT="%(ENV_API_LISTEN_PORT)s",FORWARDED_ALLOW_IPS="%(ENV_API_FORWARDED_ALLOW_IPS)s"
priority=25
startsecs=5
startretries=3
stopwaitsecs=30
stopsignal=TERM
autostart=true
autorestart=false
redirect_stderr=false
killasgroup=true
stopasgroup=true
stdout_events_enabled=true
stderr_events_enabled=true

src/api/Dockerfile (new file, 78 lines)
@@ -0,0 +1,78 @@
FROM python:3.13-alpine@sha256:9ba6d8cbebf0fb6546ae71f2a1c14f6ffd2fdab83af7fa5669734ef30ad48844 AS builder

ARG TARGETPLATFORM

# Build-time deps
RUN apk add --no-cache build-base libffi-dev cargo

# Copy requirements
COPY src/deps/requirements.txt /tmp/requirements-deps.txt
COPY src/api/requirements.txt /tmp/req/requirements-api.txt
COPY src/common/gen/requirements.txt /tmp/req/requirements-gen.txt
COPY src/common/db/requirements.txt /tmp/req/requirements-db.txt
COPY src/common/db/requirements.armv7.txt /tmp/req/requirements-db.armv7.txt

WORKDIR /usr/share/bunkerweb

# Install python requirements
RUN export MAKEFLAGS="-j$(nproc)" && \
    if [ "$TARGETPLATFORM" = "linux/arm/v7" ] ; then mv /tmp/req/requirements-db.armv7.txt /tmp/req/requirements-db.txt ; else rm -f /tmp/req/requirements-db.armv7.txt ; fi && \
    pip install --no-cache-dir --require-hashes --break-system-packages -r /tmp/requirements-deps.txt && \
    pip install --no-cache-dir --require-hashes --target deps/python $(for file in $(ls /tmp/req/requirements*.txt) ; do echo "-r ${file}" ; done | xargs)

# Copy shared code and service
COPY src/common/api api
COPY src/common/db db
COPY src/common/utils utils
COPY src/common/helpers helpers
COPY src/VERSION VERSION
COPY src/api api

FROM python:3.13-alpine@sha256:9ba6d8cbebf0fb6546ae71f2a1c14f6ffd2fdab83af7fa5669734ef30ad48844

RUN umask 027

COPY src/deps/requirements.txt /tmp/requirements-deps.txt

# Install python requirements
RUN export MAKEFLAGS="-j$(nproc)" && \
    pip install --no-cache-dir --require-hashes --break-system-packages -r /tmp/requirements-deps.txt

# Install runtime dependencies and add api user
RUN apk add --no-cache bash unzip mariadb-connector-c mariadb-client postgresql-client sqlite tzdata && \
    addgroup -g 101 api && \
    adduser -h /usr/share/bunkerweb/api -g api -s /sbin/nologin -G api -D -H -u 101 --disabled-password api

# Copy deps and app
COPY --from=builder --chown=0:101 --chmod=550 /usr/share/bunkerweb /usr/share/bunkerweb

WORKDIR /usr/share/bunkerweb

# Set up minimal data dirs and permissions
RUN echo "Docker" > INTEGRATION && \
    mkdir -p /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
    mkdir -p /data/lib && ln -s /data/lib /var/lib/bunkerweb && \
    chown -R root:api INTEGRATION /data /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
    chmod -R 770 /data /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
    find . \( -path './api' -o -path './db' -o -path './utils' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
    find api db utils helpers deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
    chmod 660 INTEGRATION

LABEL maintainer="Bunkerity <contact@bunkerity.com>"
LABEL version="1.6.5-rc3"
LABEL url="https://www.bunkerweb.io"
LABEL bunkerweb.type="api"

VOLUME /data

EXPOSE 8888

WORKDIR /usr/share/bunkerweb/api

USER api:api

HEALTHCHECK --interval=10s --timeout=10s --start-period=30s --retries=6 CMD /usr/share/bunkerweb/helpers/healthcheck-api.sh

ENV PYTHONPATH="/usr/share/bunkerweb/deps/python"

ENTRYPOINT [ "./entrypoint.sh" ]

src/api/app/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""BunkerWeb API package."""

src/api/app/auth/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Auth package."""

src/api/app/auth/biscuit.py (new file, 513 lines)
@@ -0,0 +1,513 @@
from contextlib import suppress
from ipaddress import ip_address
from traceback import format_exc
from typing import Optional
from datetime import datetime, timezone, timedelta

from fastapi import HTTPException, Request
from biscuit_auth import Authorizer, Biscuit, BiscuitValidationError, Check, Policy, PublicKey, AuthorizationError, Fact

from common_utils import get_version  # type: ignore

from ..config import api_config
from ..utils import BISCUIT_PUBLIC_KEY_FILE
from .common import get_auth_header, parse_bearer_token


OPERATION_BY_METHOD = {
    "GET": "read",
    "OPTIONS": "read",
    "POST": "write",
    "PUT": "write",
    "PATCH": "write",
    "DELETE": "write",
}

# Default fine-grained permission verb mapping by method
PERM_VERB_BY_METHOD = {
    "GET": "read",
    "OPTIONS": "read",
    "POST": "create",
    "PUT": "update",
    "PATCH": "update",
    "DELETE": "delete",
}
|
||||
|
||||
|
||||
def _resolve_bans(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
|
||||
"""Resolve bans endpoints to fine-grained permissions.
|
||||
|
||||
Supported endpoints:
|
||||
- GET /bans -> ban_read
|
||||
- POST /bans -> ban_created
|
||||
- POST /bans/ban -> ban_created
|
||||
- DELETE /bans -> ban_delete
|
||||
- POST /bans/unban -> ban_delete
|
||||
For any other path under /bans, fallback to verb-based mapping.
|
||||
"""
|
||||
rtype = "bans"
|
||||
p = path_normalized
|
||||
if p == "/bans":
|
||||
if method_u == "GET":
|
||||
return rtype, "ban_read"
|
||||
if method_u == "POST":
|
||||
return rtype, "ban_created"
|
||||
if method_u == "DELETE":
|
||||
return rtype, "ban_delete"
|
||||
elif p == "/bans/ban" and method_u == "POST":
|
||||
return rtype, "ban_created"
|
||||
elif p == "/bans/unban" and method_u == "POST":
|
||||
return rtype, "ban_delete"
|
||||
|
||||
verb = PERM_VERB_BY_METHOD.get(method_u)
|
||||
if verb == "read":
|
||||
return rtype, "ban_read"
|
||||
if verb == "update":
|
||||
return rtype, "ban_update"
|
||||
if verb == "delete":
|
||||
return rtype, "ban_delete"
|
||||
if verb == "create":
|
||||
return rtype, "ban_created"
|
||||
return rtype, None
|
||||
|
||||
|
||||
def _resolve_instances(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
|
||||
"""Resolve instances endpoints to fine-grained permissions.
|
||||
|
||||
Supported endpoints:
|
||||
- GET /instances/health | /instances/ping -> instances_read
|
||||
- POST /instances/reload | legacy /reload -> instances_execute
|
||||
- POST /instances/stop | legacy /stop -> instances_execute
|
||||
Fallback: instances_<verb> based on method.
|
||||
"""
|
||||
rtype = "instances"
|
||||
p = path_normalized
|
||||
parts = [seg for seg in p.split("/") if seg]
|
||||
# Read actions
|
||||
if method_u in {"GET", "OPTIONS"}:
|
||||
if p in {"/instances/health", "/instances/ping"}:
|
||||
return rtype, "instances_read"
|
||||
# Support per-instance ping: /instances/{hostname}/ping
|
||||
if len(parts) == 3 and parts[0] == "instances" and parts[2] == "ping":
|
||||
return rtype, "instances_read"
|
||||
# Execute actions
|
||||
if method_u == "POST":
|
||||
if p in {"/instances/reload", "/reload", "/instances/stop", "/stop"}:
|
||||
return rtype, "instances_execute"
|
||||
# Support per-instance reload/stop: /instances/{hostname}/reload|stop
|
||||
if len(parts) == 3 and parts[0] == "instances" and parts[2] in {"reload", "stop"}:
|
||||
return rtype, "instances_execute"
|
||||
verb = PERM_VERB_BY_METHOD.get(method_u)
|
||||
if verb:
|
||||
return rtype, f"instances_{verb}"
|
||||
return rtype, None
|
||||
|
||||
|
||||
def _resolve_global_config(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
|
||||
"""Resolve global_config endpoints to fine-grained permissions.
|
||||
|
||||
Supported endpoints:
|
||||
- GET /global_config -> global_config_read
|
||||
- POST|PUT|PATCH /global_config -> global_config_update
|
||||
Also accepts hyphenated path prefix (global-config) but canonicalizes rtype to global_config.
|
||||
"""
|
||||
rtype = "global_config"
|
||||
verb = PERM_VERB_BY_METHOD.get(method_u)
|
||||
if verb == "read":
|
||||
return rtype, "global_config_read"
|
||||
if verb in {"create", "update"}:
|
||||
return rtype, "global_config_update"
|
||||
# For DELETE or other methods, no fine-grained permission mapping
|
||||
return rtype, None
|
||||
|
||||
|
||||
def _resolve_services(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
|
||||
"""Resolve services endpoints to fine-grained permissions.
|
||||
|
||||
Permissions are named with singular prefix (service_*), resource_type is plural "services".
|
||||
Special endpoints:
|
||||
- POST /services/convert -> service_convert
|
||||
- GET /services/export -> service_export
|
||||
CRUD:
|
||||
- GET /services or /services/{id} -> service_read
|
||||
- POST /services -> service_create
|
||||
- PUT|PATCH /services/{id} -> service_update
|
||||
- DELETE /services/{id} -> service_delete
|
||||
"""
|
||||
rtype = "services"
|
||||
p = path_normalized
|
||||
parts = [seg for seg in p.split("/") if seg]
|
||||
|
||||
# Special actions
|
||||
if p == "/services/convert" and method_u == "POST":
|
||||
return rtype, "service_convert"
|
||||
if p == "/services/export" and method_u in {"GET", "OPTIONS"}:
|
||||
return rtype, "service_export"
|
||||
|
||||
# Read
|
||||
if method_u in {"GET", "OPTIONS"}:
|
||||
if p == "/services" or (len(parts) == 2 and parts[0] == "services"):
|
||||
return rtype, "service_read"
|
||||
# Create
|
||||
if method_u == "POST" and p == "/services":
|
||||
return rtype, "service_create"
|
||||
# Update
|
||||
if method_u in {"PUT", "PATCH"} and len(parts) == 2 and parts[0] == "services":
|
||||
return rtype, "service_update"
|
||||
# Delete
|
||||
if method_u == "DELETE" and len(parts) == 2 and parts[0] == "services":
|
||||
return rtype, "service_delete"
|
||||
|
||||
# Fallback by verb if under services
|
||||
verb = PERM_VERB_BY_METHOD.get(method_u)
|
||||
if verb == "read":
|
||||
return rtype, "service_read"
|
||||
if verb == "create":
|
||||
return rtype, "service_create"
|
||||
if verb == "update":
|
||||
return rtype, "service_update"
|
||||
if verb == "delete":
|
||||
return rtype, "service_delete"
|
||||
return rtype, None
|
||||
|
||||
|
||||
def _resolve_configs(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
    """Resolve configs endpoints to fine-grained permissions (singular names).

    Maps to: config_read, config_create, config_update, config_delete
    Examples:
    - GET /configs or /configs/{...} -> config_read
    - POST /configs or /configs/upload -> config_create
    - PUT|PATCH /configs/{service}/{type}/{name}[...] -> config_update
    - DELETE /configs[...] -> config_delete
    """
    rtype = "configs"
    p = path_normalized
    parts = [seg for seg in p.split("/") if seg]

    # Read
    if method_u in {"GET", "OPTIONS"}:
        if p == "/configs":
            return rtype, "config_read"
        if len(parts) >= 2 and parts[0] == "configs":
            return rtype, "config_read"
    # Create
    if method_u == "POST" and p in ("/configs", "/configs/upload"):
        return rtype, "config_create"
    # Update
    if method_u in {"PUT", "PATCH"} and len(parts) >= 2 and parts[0] == "configs":
        return rtype, "config_update"
    # Delete
    if method_u == "DELETE" and parts and parts[0] == "configs":
        return rtype, "config_delete"

    # Fallback by verb
    verb = PERM_VERB_BY_METHOD.get(method_u)
    if verb == "read":
        return rtype, "config_read"
    if verb == "create":
        return rtype, "config_create"
    if verb == "update":
        return rtype, "config_update"
    if verb == "delete":
        return rtype, "config_delete"
    return rtype, None


def _resolve_plugins(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
    """Resolve plugins endpoints to fine-grained permissions (singular names).

    Maps to: plugin_read, plugin_create, plugin_delete
    Examples:
    - GET /plugins or /plugins/{id} -> plugin_read
    - POST /plugins/upload -> plugin_create
    - DELETE /plugins/{id} -> plugin_delete
    """
    rtype = "plugins"
    p = path_normalized
    parts = [seg for seg in p.split("/") if seg]

    if method_u in {"GET", "OPTIONS"}:
        if p == "/plugins" or (len(parts) == 2 and parts[0] == "plugins"):
            return rtype, "plugin_read"
    if method_u == "POST" and p == "/plugins/upload":
        return rtype, "plugin_create"
    if method_u == "DELETE" and len(parts) == 2 and parts[0] == "plugins":
        return rtype, "plugin_delete"

    # Fallback by verb
    verb = PERM_VERB_BY_METHOD.get(method_u)
    if verb == "read":
        return rtype, "plugin_read"
    if verb == "create":
        return rtype, "plugin_create"
    if verb == "update":
        # No explicit update endpoint yet; treat as create for permission purposes
        return rtype, "plugin_create"
    if verb == "delete":
        return rtype, "plugin_delete"
    return rtype, None


def _resolve_cache(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
    """Resolve cache endpoints to fine-grained permissions.

    Supported endpoints (current router):
    - GET /cache -> cache_read
    - GET /cache/{service}/{plugin}/{job}/{file} -> cache_read
    - DELETE /cache -> cache_delete
    - DELETE /cache/{service}/{plugin}/{job}/{file} -> cache_delete

    For any other method, fall back to coarse role-based auth.
    """
    rtype = "cache"
    if method_u in {"GET", "OPTIONS"}:
        return rtype, "cache_read"
    if method_u == "DELETE":
        return rtype, "cache_delete"
    return rtype, None


def _resolve_jobs(path_normalized: str, method_u: str) -> tuple[Optional[str], Optional[str]]:
    """Resolve jobs endpoints to fine-grained permissions.

    - GET /jobs -> job_read
    - GET /jobs/errors -> job_read
    - POST /jobs/run -> job_run
    """
    rtype = "jobs"
    p = path_normalized
    if method_u in {"GET", "OPTIONS"}:
        if p in {"/jobs", "/jobs/errors"}:
            return rtype, "job_read"
    if method_u == "POST" and p == "/jobs/run":
        return rtype, "job_run"

    # Fallback by verb
    verb = PERM_VERB_BY_METHOD.get(method_u)
    if verb == "read":
        return rtype, "job_read"
    return rtype, None


def _resolve_resource_and_perm(path: str, method: str) -> tuple[Optional[str], Optional[str]]:
    """Derive resource_type and required permission name from the request path and method.

    Rules:
    - Category (resource_type) is the first path segment ("/x/..." -> x).
    - Bans:
      - GET /bans -> ban_read
      - POST /ban -> ban_created
      - POST /unban -> ban_delete
      - Else map by method: ban_update/ban_delete/ban_read/ban_created
    - Instances:
      - POST /reload or /stop -> instances_execute
      - Else instances_<verb>
    - Generic fallback:
      - {resource_type}_{verb}
    Returns (resource_type, permission) or (None, None) if not applicable.
    """
    # Normalize and split the path (ignore trailing slash)
    if not path.startswith("/"):
        path = "/" + path
    p = path.rstrip("/") or "/"
    parts = [seg for seg in p.split("/") if seg]
    if not parts:
        return None, None

    first = parts[0].lower()
    method_u = method.upper()

    # Bans category
    if first == "bans" or p.startswith("/bans"):
        return _resolve_bans(p, method_u)

    # Instances special cases
    if first in {"instances", "reload", "stop"}:
        return _resolve_instances(p, method_u)
    # Global config special cases (canonicalize the hyphenated variant)
    if first in {"global_config", "global-config"}:
        return _resolve_global_config(p, method_u)
    # Services special cases
    if first == "services":
        return _resolve_services(p, method_u)
    # Configs special cases
    if first == "configs":
        return _resolve_configs(p, method_u)
    # Plugins special cases
    if first == "plugins":
        return _resolve_plugins(p, method_u)
    # Jobs special cases
    if first == "jobs":
        return _resolve_jobs(p, method_u)
    # Cache special cases
    if first == "cache":
        return _resolve_cache(p, method_u)

    # Generic mapping based on the first segment
    rtype = first
    verb = PERM_VERB_BY_METHOD.get(method_u)
    if not verb:
        return rtype, None
    return rtype, f"{rtype}_{verb}"


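The generic fallback described above maps any uncategorized path to `{resource_type}_{verb}`. A minimal standalone sketch of just that fallback (hypothetical re-implementation for illustration; the real resolver dispatches to the per-category helpers first):

```python
# Method -> CRUD verb table, as assumed by the generic fallback
PERM_VERB_BY_METHOD = {"GET": "read", "OPTIONS": "read", "POST": "create", "PUT": "update", "PATCH": "update", "DELETE": "delete"}


def resolve_generic(path: str, method: str):
    # First path segment is the resource type; the verb comes from the HTTP method.
    parts = [seg for seg in path.split("/") if seg]
    if not parts:
        return None, None
    rtype = parts[0].lower()
    verb = PERM_VERB_BY_METHOD.get(method.upper())
    return (rtype, f"{rtype}_{verb}") if verb else (rtype, None)
```

Unknown methods (e.g. HEAD) yield a resource type but no permission, which triggers the coarse role-based fallback in phase 2.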
def _extract_resource_id(path: str, rtype: Optional[str]) -> Optional[str]:
    """Extract a resource identifier from the path when possible.

    Convention: /<rtype>/<id>/... -> id
    For action-only routes like /reload, /stop, /ban, /unban, returns None.
    """
    if not rtype:
        return None
    if not path.startswith("/"):
        path = "/" + path
    parts = [p for p in path.split("/") if p]
    if not parts:
        return None
    # Skip known action-like endpoints without IDs
    if parts[0] in {"reload", "stop", "ban", "unban", "bans", "global_config", "global-config"}:
        return None
    # Skip services action endpoints when extracting the ID
    if parts[0] == "services" and len(parts) >= 2 and parts[1] in {"convert", "export"}:
        return None
    # Skip upload pseudo-id segments for configs/plugins
    if parts[0] in {"configs", "plugins"} and len(parts) >= 2 and parts[1] == "upload":
        return None
    # Skip jobs pseudo-action segments
    if parts[0] == "jobs" and len(parts) >= 2 and parts[1] in {"run", "errors"}:
        return None
    # For the instances category, skip action subpaths like /instances/ping, /instances/reload, /instances/stop
    if parts[0] == "instances" and len(parts) >= 2 and parts[1] in {"ping", "reload", "stop"}:
        return None
    # The second segment is considered the resource_id if present
    if len(parts) >= 2:
        return parts[1]
    return None


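The id-extraction convention above (second path segment, with action segments excluded) can be sketched standalone; this hypothetical mini version keeps only a subset of the skip rules, for illustration:

```python
# Per-category action segments that must not be mistaken for resource IDs (subset, assumed)
SKIP_ACTIONS = {
    "services": {"convert", "export"},
    "configs": {"upload"},
    "plugins": {"upload"},
    "jobs": {"run", "errors"},
    "instances": {"ping", "reload", "stop"},
}


def extract_resource_id(path: str):
    # Convention: /<rtype>/<id>/... -> id; action segments are not IDs.
    parts = [p for p in path.split("/") if p]
    if len(parts) < 2:
        return None
    if parts[1] in SKIP_ACTIONS.get(parts[0], set()):
        return None
    return parts[1]
```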
class BiscuitGuard:
    def __init__(self) -> None:
        from ..utils import LOGGER  # local import to avoid cycles

        self._logger = LOGGER
        self._public_key: Optional[PublicKey] = None
        self._load_public_key()

    def _load_public_key(self) -> None:
        try:
            hex_key = BISCUIT_PUBLIC_KEY_FILE.read_text(encoding="utf-8").strip()
            if not hex_key:
                raise ValueError("Biscuit public key file is empty")
            self._public_key = PublicKey.from_hex(hex_key)
            self._logger.debug("Biscuit public key loaded successfully")
        except Exception as e:
            with suppress(Exception):
                self._logger.debug(f"Failed to load Biscuit public key: {e}")
            raise RuntimeError(f"Failed to load Biscuit public key: {e}")

    def __call__(self, request: Request) -> None:
        # Skip auth for health, login, and OpenAPI/docs endpoints
        self._logger.debug(f"Biscuit start: {request.method} {request.url.path} from {request.client.host if request.client else 'unknown'}")
        path = request.url.path
        openapi_match = api_config.openapi_url and path == api_config.openapi_url
        docs_match = api_config.docs_url and path.startswith(api_config.docs_url)
        redoc_match = api_config.redoc_url and path.startswith(api_config.redoc_url)
        if path in ("/health", "/ping") or bool(openapi_match) or bool(docs_match) or path.startswith("/auth") or bool(redoc_match):
            self._logger.debug(f"Biscuit skip for path: {path}")
            return

        authz = get_auth_header(request)
        token_str = parse_bearer_token(authz)
        if not token_str:
            self._logger.debug("Biscuit missing bearer token")
            raise HTTPException(status_code=401, detail="Unauthorized")
        try:
            assert self._public_key is not None
            token = Biscuit.from_base64(token_str, self._public_key)
            self._logger.debug("Biscuit token parsed and verified against public key")
        except BiscuitValidationError:
            self._logger.debug(f"Biscuit token validation error:\n{format_exc()}")
            raise HTTPException(status_code=401, detail="Unauthorized")
        except Exception:
            self._logger.debug(f"Biscuit token parsing failed with unexpected error:\n{format_exc()}")
            raise HTTPException(status_code=401, detail="Unauthorized")

        # Phase 1: freshness and IP binding
        try:
            az = Authorizer()
            az.add_token(token)
            az.add_check(Check(f'check if version("{get_version()}")'))
            # Enforce token issuance time not older than the configured TTL
            try:
                ttl_s = int(api_config.biscuit_ttl_seconds)
            except Exception:
                ttl_s = 0
            if ttl_s > 0:
                cutoff = datetime.now(timezone.utc) - timedelta(seconds=ttl_s)
                # The token contains a typed datetime literal via time(<iso8601>)
                az.add_check(Check(f"check if time($t), $t >= {cutoff.isoformat()}"))
                self._logger.debug(f"Biscuit phase1: enforce TTL={ttl_s}s (cutoff={cutoff.isoformat()})")

            client_ip = request.client.host if request.client else "0.0.0.0"
            if api_config.check_private_ip or not ip_address(client_ip).is_private:
                az.add_check(Check(f'check if client_ip("{client_ip}")'))
                self._logger.debug(f"Biscuit phase1: enforce client_ip={client_ip} (check_private_ip={api_config.check_private_ip})")
            else:
                self._logger.debug(f"Biscuit phase1: skip client_ip check for private IP {client_ip}")

            az.add_policy(Policy("allow if true"))
            self._logger.debug("Biscuit phase1: authorizing freshness/IP checks")
            az.authorize()
            self._logger.debug("Biscuit phase1: authorization success")
        except AuthorizationError:
            self._logger.debug(f"Biscuit phase1: authorization failed (AuthorizationError):\n{format_exc()}")
            raise HTTPException(status_code=401, detail="Unauthorized")
        except Exception:
            self._logger.debug(f"Biscuit phase1: authorization failed (unexpected error):\n{format_exc()}")
            raise HTTPException(status_code=401, detail="Unauthorized")

        # Phase 2: route authorization (coarse and fine-grained)
        try:
            az = Authorizer()
            az.add_token(token)

            # Always add the operation fact for observability
            operation = OPERATION_BY_METHOD.get(request.method.upper(), "read")
            az.add_fact(Fact(f'operation("{operation}")'))
            self._logger.debug(f"Biscuit phase2: operation={operation}")

            # Derive the fine-grained context
            rtype, req_perm = _resolve_resource_and_perm(request.url.path, request.method)
            if rtype and req_perm:
                az.add_fact(Fact(f'resource("{request.url.path}")'))
                az.add_fact(Fact(f'resource_type("{rtype}")'))
                az.add_fact(Fact(f'required_perm("{req_perm}")'))
                rid = _extract_resource_id(request.url.path, rtype)
                if rid is not None:
                    az.add_fact(Fact(f'resource_id("{rid}")'))
                self._logger.debug(f"Biscuit phase2: rtype={rtype}, required_perm={req_perm}, resource_id={rid if rid is not None else '*none*'}")

                # Enforce fine-grained authorization
                az.add_policy(Policy("allow if admin(true)"))
                # Global grant (resource_id == "*")
                az.add_policy(Policy('allow if api_perm($rt, "*", $perm), required_perm($perm), resource_type($rt)'))
                # Specific resource grant
                az.add_policy(Policy("allow if api_perm($rt, $rid, $perm), required_perm($perm), resource_type($rt), resource_id($rid)"))
            else:
                # Fall back to coarse role-based authorization when no fine-grained mapping exists
                az.add_policy(Policy("allow if role($role, $perms), operation($op), $perms.contains($op)"))
                self._logger.debug("Biscuit phase2: fallback to coarse role-based authorization")

            self._logger.debug("Biscuit phase2: authorizing route access")
            az.authorize()
            self._logger.debug("Biscuit phase2: authorization success")
        except AuthorizationError:
            self._logger.debug(f"Biscuit phase2: authorization failed (AuthorizationError):\n{format_exc()}")
            raise HTTPException(status_code=403, detail="Forbidden")
        except Exception:
            self._logger.debug(f"Biscuit phase2: authorization failed (unexpected error):\n{format_exc()}")
            raise HTTPException(status_code=403, detail="Forbidden")


guard = BiscuitGuard()
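The phase-1 TTL rule rejects tokens whose `time(...)` fact predates `now - TTL`. How the cutoff check string is built can be sketched in isolation (the datalog syntax mirrors the `Check` string above; `ttl_check` is a hypothetical helper name, not part of the module):

```python
from datetime import datetime, timedelta, timezone


def ttl_check(ttl_s: int) -> str:
    # Tokens carry a typed datetime fact time(<iso8601>); reject any minted before the cutoff.
    cutoff = datetime.now(timezone.utc) - timedelta(seconds=ttl_s)
    return f"check if time($t), $t >= {cutoff.isoformat()}"
```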
17  src/api/app/auth/common.py  Normal file
@@ -0,0 +1,17 @@
from typing import Optional
from fastapi import Request


def get_auth_header(request: Request) -> str:
    """Return the Authorization header (case-insensitive) or an empty string."""
    return request.headers.get("Authorization") or request.headers.get("authorization") or ""


def parse_bearer_token(auth_header: str) -> Optional[str]:
    """Parse a Bearer auth header and return the token string or None."""
    if not auth_header or not auth_header.lower().startswith("bearer "):
        return None
    try:
        return auth_header.split(" ", 1)[1].strip()
    except Exception:
        return None
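The Bearer parsing helper behaves as in this self-contained sketch (same logic, duplicated here so it runs standalone): the scheme match is case-insensitive and the token is everything after the first space, stripped.

```python
from typing import Optional


def parse_bearer_token(auth_header: str) -> Optional[str]:
    # Case-insensitive "Bearer " prefix; the remainder (stripped) is the token.
    if not auth_header or not auth_header.lower().startswith("bearer "):
        return None
    return auth_header.split(" ", 1)[1].strip()
```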
120  src/api/app/auth/guard.py  Normal file
@@ -0,0 +1,120 @@
from hmac import compare_digest
from time import time
from fastapi import HTTPException, Request, Depends
from fastapi.security import HTTPBasic, HTTPBasicCredentials
from ..utils import LOGGER
from ..config import api_config
from .biscuit import guard as biscuit_guard
from ..utils import check_password, get_api_db
from .common import get_auth_header, parse_bearer_token


security = HTTPBasic(auto_error=False)


class BiscuitWithAdminBearer:
    """Authorization dependency:
    - If API_TOKEN is configured and matches the Bearer token, allow full access (admin-like).
    - If HTTP Basic auth is provided and matches an admin API user, allow full access.
    - If Authorization is Bearer but doesn't match API_TOKEN (or no API_TOKEN is set), defer to the Biscuit guard for ACL-based authorization.
    - Skips health, ping, and docs.
    """

    def __init__(self) -> None:
        self._logger = LOGGER
        # username -> (is_admin, password_hash_bytes, expires_at)
        self._admin_cache: dict[str, tuple[bool, bytes, float]] = {}
        self._cache_ttl = 30.0  # seconds

    async def __call__(self, request: Request, credentials: HTTPBasicCredentials | None = Depends(security)) -> None:
        # Skip auth for health, login, and OpenAPI/docs endpoints
        self._logger.debug(f"Auth start: {request.method} {request.url.path} from {request.client.host if request.client else 'unknown'}")
        path = request.url.path
        openapi_match = api_config.openapi_url and path == api_config.openapi_url
        docs_match = api_config.docs_url and path.startswith(api_config.docs_url)
        redoc_match = api_config.redoc_url and path.startswith(api_config.redoc_url)
        if path in ("/health", "/ping") or bool(openapi_match) or bool(docs_match) or bool(redoc_match) or path.startswith("/auth"):
            self._logger.debug(f"Auth skip for path: {path}")
            return

        # Read the Authorization header once
        authz = get_auth_header(request)
        scheme = (authz.split(" ", 1)[0].lower() if authz else "").strip(":") or "none"
        self._logger.debug(f"Authorization scheme detected: {scheme}")

        # First path: HTTP Basic for admin users
        if credentials is not None:
            username = credentials.username or ""
            password = credentials.password or ""
            self._logger.debug(f"Basic auth provided for user={username}")

            # Validate against API users and require admin (with cache)
            now = time()
            hit = self._admin_cache.get(username)
            is_admin: bool
            pwd_hash: bytes
            if hit and hit[2] > now:
                is_admin, pwd_hash, _exp = hit
                self._logger.debug(f"Admin cache hit for user={username}, expires_in={int(hit[2] - now)}s")
            else:
                db = get_api_db(log=False)
                user = db.get_api_user(username=username, as_dict=True)
                is_admin = bool(user.get("admin")) if user else False
                raw_hash = (user.get("password") if user else b"") or b""
                if not isinstance(raw_hash, (bytes, bytearray)):
                    try:
                        raw_hash = str(raw_hash or "").encode("utf-8")
                    except Exception:
                        raw_hash = b""
                pwd_hash = bytes(raw_hash)
                # Negative and positive cache
                self._admin_cache[username] = (is_admin, pwd_hash, now + self._cache_ttl)
                self._logger.debug(f"Admin cache {'populate' if user else 'miss'} for user={username}, is_admin={is_admin}")

            if not is_admin:
                self._logger.warning(
                    f"Auth failed (basic user not admin or not found): user={username} {request.method} {request.url.path} from {request.client.host if request.client else 'unknown'}"
                )
                raise HTTPException(status_code=401, detail="Unauthorized")
            if not pwd_hash or not check_password(password, pwd_hash):
                self._logger.warning(
                    f"Auth failed (basic password mismatch): user={username} {request.method} {request.url.path} from {request.client.host if request.client else 'unknown'}"
                )
                raise HTTPException(status_code=401, detail="Unauthorized")
            self._logger.debug(f"Auth success via Basic admin: user={username}")
            return  # Full access for admin via Basic

        # Second path: API token as admin override (Bearer) or Biscuit
        api_token = api_config.API_TOKEN
        if authz.lower().startswith("bearer "):
            provided = parse_bearer_token(authz) or ""
            if api_token and compare_digest(provided, api_token):
                self._logger.debug("Auth success via admin Bearer token (API_TOKEN)")
                return  # Full access via admin Bearer
            # Not the admin token (or no API_TOKEN set): try the Biscuit ACL
            try:
                self._logger.debug("Delegating to Biscuit guard (Bearer)")
                biscuit_guard(request)
                self._logger.debug("Biscuit guard success (Bearer path)")
                return
            except HTTPException as e:
                # Bubble up after logging
                self._logger.warning(
                    f"Auth failed (biscuit {e.status_code}): {request.method} {request.url.path} from {request.client.host if request.client else 'unknown'} reason={getattr(e, 'detail', '')}"
                )
                raise

        # Otherwise rely on the Biscuit token for ACL (will raise for a missing Bearer token)
        try:
            self._logger.debug("Delegating to Biscuit guard (no Basic/Bearer admin)")
            biscuit_guard(request)
            self._logger.debug("Biscuit guard success (default path)")
        except HTTPException as e:
            # Log Biscuit authorization failures without exposing the token
            self._logger.warning(
                f"Auth failed (biscuit {e.status_code}): {request.method} {request.url.path} from {request.client.host if request.client else 'unknown'} reason={getattr(e, 'detail', '')}"
            )
            raise


guard = BiscuitWithAdminBearer()
169  src/api/app/config.py  Normal file
@@ -0,0 +1,169 @@
from os import getenv, sep
from os.path import join
from sys import path as sys_path
from typing import Optional, Union

# Ensure shared libs are importable when running in a container
for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
    if deps_path not in sys_path:
        sys_path.append(deps_path)


def _bool_env(name: str, default: bool = False) -> bool:
    val = getenv(name)
    if val is None:
        return default
    return val.lower() in ("1", "true", "yes", "on")


from .yaml_base_settings import YamlBaseSettings, YamlSettingsConfigDict  # type: ignore


class ApiConfig(YamlBaseSettings):
    """API runtime configuration loaded from YAML/env/secrets with sensible defaults.

    Reading order:
    1. Environment variables
    2. Secrets files
    3. YAML file
    4. .env file
    5. Defaults below
    """

    # Biscuit
    CHECK_PRIVATE_IP: bool | str = "yes"  # allow "yes"/"no" in YAML
    # Biscuit token lifetime (seconds). 0 or "off" disables expiry.
    API_BISCUIT_TTL_SECONDS: int | str = 3600

    # FastAPI runtime toggles
    API_DOCS_URL: Optional[str] = "/docs"
    API_REDOC_URL: Optional[str] = "/redoc"
    API_OPENAPI_URL: Optional[str] = "/openapi.json"
    API_ROOT_PATH: Optional[str] = None

    # Auth: simple Bearer token fallback
    API_TOKEN: Optional[str] = None

    # Whitelist
    API_WHITELIST_ENABLED: bool | str = "yes"
    API_WHITELIST_IPS: str = "192.168.0.0/16 172.16.0.0/12 10.0.0.0/8"

    # Rate limiting
    API_RATE_LIMIT_ENABLED: bool | str = "yes"
    API_RATE_LIMIT_STORAGE_OPTIONS: Optional[str] = None
    API_RATE_LIMIT_STRATEGY: str = "fixed-window"
    API_RATE_LIMIT_HEADERS_ENABLED: bool | str = "yes"
    API_RATE_LIMIT_TIMES: int | str = 100
    API_RATE_LIMIT_SECONDS: int | str = 60
    API_RATE_LIMIT_AUTH_TIMES: int | str = 10
    API_RATE_LIMIT_AUTH_SECONDS: int | str = 60
    API_RATE_LIMIT_RULES: Optional[Union[str, object]] = None
    API_RATE_LIMIT_DEFAULTS: Optional[str] = None
    API_RATE_LIMIT_APPLICATION_LIMITS: Optional[str] = None
    API_RATE_LIMIT_KEY: str = "ip"
    API_RATE_LIMIT_EXEMPT_IPS: Optional[str] = None

    model_config = YamlSettingsConfigDict(  # type: ignore
        yaml_file=getenv("SETTINGS_YAML_FILE", "/etc/bunkerweb/api.yml"),
        env_file=getenv("SETTINGS_ENV_FILE", "/etc/bunkerweb/api.env"),
        secrets_dir=getenv("SETTINGS_SECRETS_DIR", "/run/secrets"),
        env_file_encoding="utf-8",
        extra="allow",
    )

    # --- Properties mapped to the old Settings interface ---
    @property
    def check_private_ip(self) -> bool:
        val = str(self.CHECK_PRIVATE_IP).strip().lower()
        return val in ("1", "true", "yes", "on")

    @staticmethod
    def _maybe_url(value: Optional[str], default: Optional[str]) -> Optional[str]:
        if value is None:
            return default
        lowered = value.strip().lower()
        return None if lowered in ("", "no", "none", "disabled", "off", "false", "0") else value

    @property
    def docs_url(self) -> Optional[str]:
        return self._maybe_url(self.API_DOCS_URL, "/docs")

    @property
    def redoc_url(self) -> Optional[str]:
        return self._maybe_url(self.API_REDOC_URL, "/redoc")

    @property
    def openapi_url(self) -> Optional[str]:
        return self._maybe_url(self.API_OPENAPI_URL, "/openapi.json")

    @property
    def whitelist_enabled(self) -> bool:
        v = str(self.API_WHITELIST_ENABLED).strip().lower()
        return v in ("1", "true", "yes", "on")

    @property
    def biscuit_ttl_seconds(self) -> int:
        """Return the Biscuit token TTL in seconds; 0 means disabled."""
        raw = str(self.API_BISCUIT_TTL_SECONDS).strip().lower()
        if raw in ("off", "disabled", "none", "false", "no", ""):
            return 0
        try:
            v = int(float(raw))  # allow strings/numerics
            return max(0, v)
        except Exception:
            return 3600

    # Rate limiting mapped properties
    @property
    def rate_limit_enabled(self) -> bool:
        v = str(self.API_RATE_LIMIT_ENABLED).strip().lower()
        return v in ("1", "true", "yes", "on")

    @property
    def rate_limit_headers_enabled(self) -> bool:
        v = str(self.API_RATE_LIMIT_HEADERS_ENABLED).strip().lower()
        return v in ("1", "true", "yes", "on")

    @property
    def rate_limit_times(self) -> int:
        return int(self.API_RATE_LIMIT_TIMES)  # type: ignore[arg-type]

    @property
    def rate_limit_seconds(self) -> int:
        return int(self.API_RATE_LIMIT_SECONDS)  # type: ignore[arg-type]

    @property
    def rate_limit_auth_times(self) -> int:
        return int(self.API_RATE_LIMIT_AUTH_TIMES)  # type: ignore[arg-type]

    @property
    def rate_limit_auth_seconds(self) -> int:
        return int(self.API_RATE_LIMIT_AUTH_SECONDS)  # type: ignore[arg-type]

    # Internal API resolution, keeping DB-sourced fallbacks
    @property
    def internal_api_port(self) -> str:
        try:
            from .utils import get_db  # late import to avoid cycles

            cfg = get_db(log=False).get_config(global_only=True, methods=False, filtered_settings=("API_HTTP_PORT",))
            return str(cfg.get("API_HTTP_PORT", "5000"))
        except Exception:
            return "5000"

    @property
    def internal_api_host_header(self) -> str:
        try:
            from .utils import get_db  # late import to avoid cycles

            cfg = get_db(log=False).get_config(global_only=True, methods=False, filtered_settings=("API_SERVER_NAME",))
            return str(cfg.get("API_SERVER_NAME", "bwapi"))
        except Exception:
            return "bwapi"

    @property
    def internal_endpoint(self) -> str:
        return f"http://127.0.0.1:{self.internal_api_port}"


api_config = ApiConfig()
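The string-to-bool and TTL coercions used by `check_private_ip` and `biscuit_ttl_seconds` can be sketched standalone (same accepted spellings; the helper names here are illustrative, not part of the module):

```python
def to_bool(value) -> bool:
    # Accepts "1"/"true"/"yes"/"on" in any case; everything else is False.
    return str(value).strip().lower() in ("1", "true", "yes", "on")


def ttl_seconds(raw, default: int = 3600) -> int:
    # "off"-like spellings disable the TTL; unparsable values fall back to the default.
    s = str(raw).strip().lower()
    if s in ("off", "disabled", "none", "false", "no", ""):
        return 0
    try:
        return max(0, int(float(s)))
    except Exception:
        return default
```

Clamping with `max(0, ...)` means a negative TTL is treated the same as a disabled one.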
39  src/api/app/deps.py  Normal file
@@ -0,0 +1,39 @@
from API import API  # type: ignore
from ApiCaller import ApiCaller  # type: ignore
from fastapi import HTTPException

from .config import api_config
from .utils import get_db


def get_internal_api() -> API:
    """Dependency that returns the internal NGINX API client."""
    return API(api_config.internal_endpoint, api_config.internal_api_host_header)


def get_instances_api_caller() -> ApiCaller:
    """Build an ApiCaller targeting all known instances from the database."""
    db = get_db(log=False)
    apis = []
    try:
        for inst in db.get_instances():
            try:
                endpoint = f"http://{inst['hostname']}:{inst['port']}"
                host = inst.get("server_name") or inst.get("name") or "bwapi"
                apis.append(API(endpoint, host))
            except Exception:
                continue
    except Exception:
        # Fall back to the internal API only if DB access fails
        apis.append(API(api_config.internal_endpoint, api_config.internal_api_host_header))
    return ApiCaller(apis)


def get_api_for_hostname(hostname: str) -> API:
    """Dependency returning a single API client targeting the given hostname."""
    inst = get_db(log=False).get_instance(hostname)
    if not inst:
        raise HTTPException(status_code=404, detail=f"Instance {hostname} not found")
    endpoint = f"http://{inst['hostname']}:{inst['port']}"
    host = inst.get("server_name") or inst.get("name") or "bwapi"
    return API(endpoint, host)
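Instance rows are turned into (endpoint, Host header) pairs with a small fallback chain; a sketch of just that mapping (hypothetical `instance_target` helper, using the same `bwapi` default as above):

```python
def instance_target(inst: dict) -> tuple[str, str]:
    # Endpoint is plain HTTP to the instance; the Host header falls back to "bwapi".
    endpoint = f"http://{inst['hostname']}:{inst['port']}"
    host = inst.get("server_name") or inst.get("name") or "bwapi"
    return endpoint, host
```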
177  src/api/app/main.py  Normal file
@@ -0,0 +1,177 @@
from contextlib import suppress
|
||||
from os import sep
|
||||
from os.path import join
|
||||
from sys import path as sys_path
|
||||
|
||||
from fastapi import FastAPI, HTTPException, Request
|
||||
from fastapi.responses import JSONResponse
|
||||
from traceback import format_exc
|
||||
from ipaddress import ip_address, ip_network, IPv4Network, IPv6Network
|
||||
import re
|
||||
|
||||
for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
|
||||
if deps_path not in sys_path:
|
||||
sys_path.append(deps_path)
|
||||
|
||||
from common_utils import get_version # type: ignore
|
||||
|
||||
from .routers.core import router as core_router
|
||||
from .utils import LOGGER
|
||||
from .rate_limit import setup_rate_limiter, limiter_dep_dynamic
|
||||
from .config import api_config
|
||||
|
||||
BUNKERWEB_VERSION = get_version()
|
||||
|
||||
|
||||
def create_app() -> FastAPI:
|
||||
app = FastAPI(
|
||||
title="BunkerWeb API",
|
||||
description=description,
|
||||
summary="The API used by BunkerWeb to communicate with the database and the instances",
|
||||
version=BUNKERWEB_VERSION,
        contact={"name": "BunkerWeb Team", "url": "https://www.bunkerweb.io", "email": "contact@bunkerity.com"},
        license_info={"name": "GNU Affero General Public License v3.0", "url": "https://github.com/bunkerity/bunkerweb/blob/master/LICENSE.md"},
        openapi_tags=tags_metadata,
        docs_url=api_config.docs_url,
        redoc_url=api_config.redoc_url,
        openapi_url=api_config.openapi_url,
        root_path=api_config.API_ROOT_PATH or "",
    )

    # Optional IP whitelist (enabled by default, can be disabled)
    whitelist_networks: list[IPv4Network | IPv6Network] = []
    if api_config.whitelist_enabled:
        raw_whitelist = api_config.API_WHITELIST_IPS.strip()
        if raw_whitelist:
            for tok in re.split(r"[\s,]+", raw_whitelist):
                if not tok:
                    continue
                try:
                    if "/" in tok:
                        whitelist_networks.append(ip_network(tok, strict=False))
                    else:
                        ipobj = ip_address(tok)
                        cidr = f"{ipobj.exploded}/32" if ipobj.version == 4 else f"{ipobj.exploded}/128"
                        whitelist_networks.append(ip_network(cidr, strict=False))
                except ValueError:
                    LOGGER.error(f"Invalid IP/CIDR in API whitelist: {tok}")
                except Exception:
                    LOGGER.error(f"Error parsing API whitelist entry {tok}: {format_exc()}")
                    continue

    if whitelist_networks:

        @app.middleware("http")
        async def whitelist_middleware(request: Request, call_next):  # pragma: no cover
            cip = request.client.host if request.client else "0.0.0.0"
            ipobj = ip_address(cip)
            for net in whitelist_networks:
                if ipobj in net:
                    return await call_next(request)
            LOGGER.warning(f"Blocking API request from non-whitelisted IP {request.client.host if request.client else 'unknown'}")
            return JSONResponse(status_code=403, content={"status": "error", "message": "forbidden"})

    # Rate limiter (optional, safe if disabled)
    setup_rate_limiter(app)

    # Inject rate limit headers on successful responses when enabled
    @app.middleware("http")
    async def rate_limit_headers_middleware(request: Request, call_next):  # pragma: no cover
        response = await call_next(request)
        limiter = getattr(app.state, "limiter", None)
        if limiter is not None:
            # Only inject when slowapi has computed the current limit
            current = getattr(request.state, "view_rate_limit", None)
            if current is not None:
                limiter._inject_asgi_headers(response.headers, current)
        return response

    # Routers with optional dynamic per-endpoint rate limiting
    app.include_router(core_router, dependencies=[limiter_dep_dynamic()])

    # Error normalization
    @app.exception_handler(HTTPException)
    async def http_exception_handler(_request: Request, exc: HTTPException):
        if exc.status_code == 500:
            # Emit full traceback at debug level to aid diagnostics
            with suppress(Exception):
                LOGGER.debug(f"HTTPException 500: {exc}\n{format_exc()}")
        detail = exc.detail if isinstance(exc.detail, str) else "error"
        return JSONResponse(status_code=exc.status_code, content={"status": "error", "message": detail})

    # Log tracebacks for unexpected errors (500)
    @app.exception_handler(Exception)
    async def unhandled_exception_handler(_request: Request, exc: Exception):
        # Emit full traceback at debug level to aid diagnostics
        with suppress(Exception):
            LOGGER.debug(f"Unhandled exception: {exc}\n{format_exc()}")
        return JSONResponse(status_code=500, content={"status": "error", "message": "internal error"})

    return app

description = (
    """# BunkerWeb Internal API

This API is the internal control plane for BunkerWeb. It manages configuration, instances, plugins, bans, and scheduler artefacts, and should remain on a trusted network.

## Feature overview

- Core: `/ping` and `/health` offer lightweight liveness probes.
- Auth: `POST /auth` exchanges Basic credentials or the admin override token for a Biscuit; admin users may also authenticate with HTTP Basic directly.
- Instances: register, list, update, and remove instances, or broadcast `/ping`, `/reload`, and `/stop` to all or specific hosts.
- Global config: `GET`/`PATCH /global_config` read or update API-owned global settings without touching other sources.
- Services: create, rename, toggle draft/online modes, convert, and delete services while keeping prefixed variables consistent.
- Custom configs: manage HTTP/stream/ModSecurity/CRS snippets via JSON payloads or uploads with `GET`/`POST`/`PATCH`/`DELETE /configs`.
- Bans: aggregate current bans and orchestrate ban/unban operations across instances with flexible bulk payloads.
- Plugins: list and install/remove UI plugins from supported archive formats with checksum validation.
- Cache & jobs: inspect, download, or purge job cache files and trigger scheduler jobs via `/cache` and `/jobs` endpoints.

All responses are normalised; errors return `{ "status": "error", "message": "..." }` with appropriate status codes.

## Access and security

- Authentication flows:
  - Login endpoint: send `Authorization: Basic <base64(username:password)>`, or provide `username`/`password` as form or JSON to `/auth`; it returns a Biscuit token you use as `Authorization: Bearer <token>` for subsequent calls.
  - Admin override: if `API_TOKEN` is set, `Authorization: Bearer <API_TOKEN>` grants full access without a Biscuit.
  - Direct Basic admin: protected endpoints also accept HTTP Basic when the user is an admin; no Biscuit is required in that case.
- Biscuit token contents: facts such as `user(<username>)`, `time(<utc-iso>)`, `client_ip(<ip>)`, `domain(<host>)`, `version(<bw-version>)`; a coarse role `role("api_user", ["read"[, "write"]])`; and either `admin(true)` for admins or fine-grained permission facts like `api_perm(<resource_type>, <resource_id|*>, <permission>)` based on DB permissions. Keys are stored at `/var/lib/bunkerweb/.api_biscuit_private_key` (signing) and `/var/lib/bunkerweb/.api_biscuit_public_key` (verification).
- Biscuit checks: verification uses the public key and enforces freshness/IP binding, then authorizes routes by mapping path/method to a required permission (for example, bans and instances have specialised mappings) with wildcard (`*`) or specific resource IDs; the guard falls back to coarse read/write when no fine-grained mapping applies.
- Passwords: API user passwords are stored as bcrypt hashes and validated on login.
- IP allowlist: when enabled, only requests from allowed IPs/CIDRs can reach the API.
- Rate limiting: configurable global and auth-specific limits; headers are injected when enabled.

Example header:

```
Authorization: Bearer <your_token_here>
```

## Configuration

Settings can be provided via `/etc/bunkerweb/api.yml`, `/etc/bunkerweb/api.env`, and `/run/secrets` (environment variables take precedence). Common keys include:

- `API_DOCS_URL`, `API_REDOC_URL`, `API_OPENAPI_URL`, `API_ROOT_PATH`: documentation and OpenAPI exposure.
- `API_TOKEN`: optional admin Bearer token used at `/auth`.
- `API_WHITELIST_IPS`: space/comma-separated IPs/CIDRs for the allowlist.
- `API_RATE_LIMIT_*`: knobs to enable/shape rate limiting.
- `API_BISCUIT_TTL_SECONDS`: lifetime of Biscuit tokens in seconds (0 disables expiry; default 3600).

"""
    + f"See the [BunkerWeb documentation](https://docs.bunkerweb.io/{BUNKERWEB_VERSION}/api/) for more details."
)  # noqa: E501
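As a quick illustration of the Basic login flow described above, the `Authorization` header value can be built as follows. This is a hedged sketch: the `admin`/`changeme` credentials are placeholders for illustration, not real defaults shipped by BunkerWeb.

```python
from base64 import b64encode


def basic_auth_header(username: str, password: str) -> str:
    """Build the `Authorization: Basic <base64(username:password)>` value for POST /auth."""
    token = b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"


print(basic_auth_header("admin", "changeme"))  # Basic YWRtaW46Y2hhbmdlbWU=
```

The Biscuit returned by `/auth` is then sent as `Authorization: Bearer <token>` on subsequent calls.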


tags_metadata = [
    {"name": "core", "description": "Health probes and global utility endpoints"},
    {"name": "auth", "description": "Authentication and Biscuit issuance"},
    {"name": "bans", "description": "Operations related to ban management"},
    {"name": "instances", "description": "Operations related to instance management"},
    {"name": "global_config", "description": "Operations related to global configuration"},
    {"name": "services", "description": "Operations related to service management"},
    {"name": "configs", "description": "Operations related to custom NGINX configs"},
    {"name": "plugins", "description": "Operations related to plugin management"},
    {"name": "cache", "description": "Operations related to job cache files"},
    {"name": "jobs", "description": "Operations related to scheduler jobs"},
]


app = create_app()
0 src/api/app/models/__init__.py Normal file
326 src/api/app/models/api_database.py Normal file

@@ -0,0 +1,326 @@
from datetime import datetime
from logging import Logger
from os import sep
from os.path import join
from sys import path as sys_path
from typing import Optional, Union

for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
    if deps_path not in sys_path:
        sys_path.append(deps_path)

"""
APIDatabase: API-specific user accessors respecting API models.

Note: Method signatures keep UI-compatible parameters for callers,
but only API model fields are used/stored.
"""

from Database import Database  # type: ignore
from model import API_RESOURCE_ENUM, API_PERMISSION_ENUM, API_users, API_permissions  # type: ignore


class APIDatabase(Database):
    def __init__(self, logger: Logger, sqlalchemy_string: Optional[str] = None, *, pool: Optional[bool] = None, log: bool = True, **kwargs) -> None:
        super().__init__(logger, sqlalchemy_string, external=True, pool=pool, log=log, **kwargs)

    # API-convention methods
    def get_api_user(self, *, username: Optional[str] = None, as_dict: bool = False) -> Optional[Union[API_users, dict]]:
        """Get an API user. If username is None, return the first admin user."""
        with self._db_session() as session:
            query = session.query(API_users)
            query = query.filter_by(username=username) if username else query.filter_by(admin=True)

            api_user = query.first()
            if not api_user:
                return None

            if not as_dict:
                return api_user

            return {
                "username": api_user.username,
                "password": api_user.password.encode("utf-8"),
                "method": api_user.method,
                "admin": api_user.admin,
                "creation_date": api_user.creation_date.astimezone(),
                "update_date": api_user.update_date.astimezone(),
            }

    def create_api_user(
        self,
        username: str,
        password: bytes,
        *,
        creation_date: Optional[datetime] = None,
        method: str = "manual",
        admin: bool = False,
    ) -> str:
        """Create an API user (API fields only)."""
        with self._db_session() as session:
            if self.readonly:
                return "The database is read-only, the changes will not be saved"

            if admin and session.query(API_users).with_entities(API_users.username).filter_by(admin=True).first():
                return "An admin user already exists"

            user = session.query(API_users).with_entities(API_users.username).filter_by(username=username).first()
            if user:
                return f"User {username} already exists"

            current_time = datetime.now().astimezone()
            session.add(
                API_users(
                    username=username,
                    password=password.decode("utf-8"),
                    method=method,
                    admin=admin,
                    creation_date=creation_date or current_time,
                    update_date=current_time,
                )
            )

            try:
                session.commit()
            except BaseException as e:
                return str(e)

        return ""

    def has_api_user(self) -> bool:
        """Return True if at least one API user exists."""
        with self._db_session() as session:
            return session.query(API_users).first() is not None

    def list_api_users(self):
        """Return a list of (username, admin) tuples for all API users."""
        with self._db_session() as session:
            rows = session.query(API_users).with_entities(API_users.username, API_users.admin).all()
            return [(u, bool(a)) for (u, a) in rows]

    # --------------------
    # Permissions (ACL)
    # --------------------
    def _allowed_resources(self) -> set:
        """Return the set of allowed resource types if available from the model enum."""
        return set(getattr(API_RESOURCE_ENUM, "enums", []) or [])

    def _allowed_permissions(self) -> set:
        """Return the set of allowed permission names if available from the model enum."""
        return set(getattr(API_PERMISSION_ENUM, "enums", []) or [])

    def grant_api_permission(
        self,
        username: str,
        permission: str,
        *,
        resource_type: str,
        resource_id: Optional[str] = None,
        granted: bool = True,
    ) -> str:
        """Grant or update a specific permission for an API user.

        Creates or updates a row in bw_api_user_permissions for the given
        (user, resource_type, resource_id, permission) with the provided granted flag.
        Returns an empty string on success or an error message on failure.
        """
        with self._db_session() as session:
            if self.readonly:
                return "The database is read-only, the changes will not be saved"

            # Validate inputs when possible
            if resource_type not in self._allowed_resources():
                return f"Invalid resource_type: {resource_type}"
            if permission not in self._allowed_permissions():
                return f"Invalid permission: {permission}"

            user = session.query(API_users).filter_by(username=username).first()
            if not user:
                return f"User {username} doesn't exist"

            now = datetime.now().astimezone()
            record = (
                session.query(API_permissions).filter_by(api_user=username, resource_type=resource_type, resource_id=resource_id, permission=permission).first()
            )

            if record:
                record.granted = granted
                record.updated_at = now
            else:
                session.add(
                    API_permissions(
                        api_user=username,
                        resource_type=resource_type,
                        resource_id=resource_id,
                        permission=permission,
                        granted=granted,
                        created_at=now,
                        updated_at=now,
                    )
                )

            try:
                session.commit()
            except BaseException as e:
                return str(e)

        return ""

    def revoke_api_permission(
        self,
        username: str,
        permission: str,
        *,
        resource_type: str,
        resource_id: Optional[str] = None,
        hard_delete: bool = False,
    ) -> str:
        """Revoke a specific permission for an API user.

        If hard_delete is True, delete the row; otherwise, mark granted=False.
        Returns an empty string on success or an error message on failure.
        """
        with self._db_session() as session:
            if self.readonly:
                return "The database is read-only, the changes will not be saved"

            if resource_type not in self._allowed_resources():
                return f"Invalid resource_type: {resource_type}"
            # Permit any permission value but prefer validating known permissions
            if permission not in self._allowed_permissions():
                return f"Invalid permission: {permission}"

            record = (
                session.query(API_permissions).filter_by(api_user=username, resource_type=resource_type, resource_id=resource_id, permission=permission).first()
            )
            if not record:
                return ""

            if hard_delete:
                session.delete(record)
            else:
                record.granted = False
                record.updated_at = datetime.now().astimezone()

            try:
                session.commit()
            except BaseException as e:
                return str(e)

        return ""

    def get_api_permissions(
        self,
        username: str,
        *,
        resource_type: Optional[str] = None,
        resource_id: Optional[str] = None,
        as_dict: bool = False,
        include_denied: bool = False,
    ) -> Union[list, dict]:
        """List permissions for an API user.

        Filters by resource_type/resource_id when provided. By default returns only granted permissions unless include_denied=True.
        If as_dict=True, returns a nested mapping: {resource_type: {resource_id or "*": {permission: granted}}}.
        """
        with self._db_session() as session:
            q = session.query(API_permissions).filter_by(api_user=username)
            if resource_type is not None:
                q = q.filter_by(resource_type=resource_type)
            if resource_id is not None:
                q = q.filter_by(resource_id=resource_id)
            if not include_denied:
                q = q.filter_by(granted=True)

            rows = q.all()

            if not as_dict:
                return rows

            result: dict = {}
            for row in rows:
                rtype = row.resource_type
                rid = row.resource_id or "*"
                result.setdefault(rtype, {})
                result[rtype].setdefault(rid, {})
                result[rtype][rid][row.permission] = bool(row.granted)
            return result

    def check_api_permission(
        self,
        username: str,
        permission: str,
        *,
        resource_type: Optional[str] = None,
        resource_id: Optional[str] = None,
    ) -> bool:
        """Check if the user has the given permission.

        - Admin users are always allowed.
        - If resource_type is provided, checks both specific (resource_id) and global (resource_id is NULL) grants.
        - If resource_type is None, checks any grant across resource types for that permission name.
        """
        with self._db_session() as session:
            user = session.query(API_users).filter_by(username=username).first()
            if not user:
                return False
            if bool(user.admin):
                return True

            q = session.query(API_permissions).filter_by(api_user=username, permission=permission, granted=True)

            if resource_type is None:
                return session.query(q.exists()).scalar() or False

            q = q.filter_by(resource_type=resource_type)
            if resource_id is None:
                # Global grant only
                q_global = q.filter_by(resource_id=None)
                return session.query(q_global.exists()).scalar() or False

            # Prefer a specific resource grant, falling back to a global one
            q_specific = q.filter_by(resource_id=resource_id)
            if session.query(q_specific.exists()).scalar():
                return True
            q_global = q.filter_by(resource_id=None)
            return session.query(q_global.exists()).scalar() or False

    def update_api_user(
        self,
        username: str,
        password: bytes,
        *,
        old_username: Optional[str] = None,
        method: str = "manual",
    ) -> str:
        """Update an API user (API fields only)."""
        old_username = old_username or username
        with self._db_session() as session:
            if self.readonly:
                return "The database is read-only, the changes will not be saved"

            user = session.query(API_users).filter_by(username=old_username).first()
            if not user:
                return f"User {old_username} doesn't exist"

            # Handle rename if needed
            if username != old_username:
                if session.query(API_users).with_entities(API_users.username).filter_by(username=username).first():
                    return f"User {username} already exists"

                user.username = username

                # Update related permissions ownership
                session.query(API_permissions).filter_by(api_user=old_username).update({"api_user": username})

            user.password = password.decode("utf-8")
            user.method = method
            user.update_date = datetime.now().astimezone()

            try:
                session.commit()
            except BaseException as e:
                return str(e)

        return ""
42 src/api/app/models/models.py Normal file

@@ -0,0 +1,42 @@
from datetime import datetime
from os.path import join, sep
from sys import path as sys_path

for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("db",))]:
    if deps_path not in sys_path:
        sys_path.append(deps_path)

from bcrypt import checkpw
from flask_login import AnonymousUserMixin, UserMixin

from model import Users  # type: ignore


class AnonymousUser(AnonymousUserMixin):
    username = "Anonymous"
    email = None
    password = ""
    method = "manual"
    admin = False
    theme = "light"
    language = "en"
    totp_secret = None
    creation_date = datetime.now().astimezone()
    update_date = datetime.now().astimezone()
    list_roles = []
    list_permissions = []
    list_recovery_codes = []

    def get_id(self):
        return self.username

    def check_password(self, password: str) -> bool:
        return False


class UiUsers(Users, UserMixin):
    def get_id(self):
        return self.username

    def check_password(self, password: str) -> bool:
        return checkpw(password.encode("utf-8"), self.password.encode("utf-8"))
580 src/api/app/rate_limit.py Normal file

@@ -0,0 +1,580 @@
from contextlib import suppress
from csv import Sniffer, reader as csv_reader
from ipaddress import IPv4Network, IPv6Network, ip_address, ip_network
from json import dumps, loads
from typing import List, Optional, Set, Tuple, Dict, Any, Union
from io import StringIO
from pathlib import Path

from fastapi import Depends, Request
from fastapi.responses import Response

from regex import compile as regex_compile, Pattern, escape, fullmatch, search, split
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from yaml import safe_load

from .config import api_config
from os import getenv
from .utils import LOGGER, get_db

_enabled: bool = False
_limiter: Optional[Limiter] = None
_default_limits: List[str] = []
_application_limits: List[str] = []
_exempt_networks: List[Union[IPv4Network, IPv6Network]] = []


class _Rule:
    __slots__ = ("methods", "pattern", "times", "seconds", "raw")

    def __init__(self, methods: Set[str], pattern: Pattern[str], times: int, seconds: int, raw: str):
        self.methods = methods
        self.pattern = pattern
        self.times = times
        self.seconds = seconds
        self.raw = raw


_rules: List[_Rule] = []


def _normalize_method(m: str) -> str:
    return m.strip().upper()


def _compile_pattern(path: str) -> Pattern[str]:
    p = path.strip()
    if p.startswith("re:"):
        return regex_compile(p[3:])
    if "*" in p:
        return regex_compile("^" + escape(p).replace("\\*", ".*") + "$")
    return regex_compile("^" + escape(p) + "$")
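`_compile_pattern` supports three path forms: a `re:`-prefixed raw regex, a `*` glob where each star becomes `.*`, and an exact anchored match. A minimal sketch of the same translation using only the stdlib `re` module (the original uses the third-party `regex` package, which is API-compatible here):

```python
import re


def compile_pattern(path: str) -> re.Pattern:
    p = path.strip()
    if p.startswith("re:"):  # raw regex, used as-is
        return re.compile(p[3:])
    if "*" in p:  # glob: each * becomes .*
        return re.compile("^" + re.escape(p).replace("\\*", ".*") + "$")
    return re.compile("^" + re.escape(p) + "$")  # exact anchored match


assert compile_pattern("/instances*").match("/instances/ping")
assert not compile_pattern("/instances").match("/instances/ping")
assert compile_pattern("re:^/jobs/\\d+$").match("/jobs/42")
```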


def _parse_rate(rate: str) -> Tuple[int, int]:
    """Parse a rate limit item from a string.

    Supported forms (mirrors limits' rate string notation):
    [count] [per|/] [n optional] [second|minute|hour|day|month|year]

    Examples:
    10/hour, 10 per hour, 100/day, 500/7days, 200 per 30 minutes, 100/60
    """
    r = rate.strip().lower().replace("per ", "/")
    if "/" not in r:
        raise ValueError("rate must be like '10/hour', '10 per hour', '10/60', '100/day', '500/7days'")
    left, right = r.split("/", 1)
    times = int(left.strip())
    right = right.strip()

    # Fast path: plain integer seconds
    with suppress(Exception):
        seconds = int(right)
        return times, seconds

    # Accept forms like "hour", "1 hour", "7days", "30 minutes", etc.
    m = fullmatch(r"(?:(\d+)\s*)?([a-z]+)", right)
    if not m:
        raise ValueError(f"invalid rate unit '{right}'")
    mult_str, unit = m.groups()
    mult = int(mult_str) if mult_str else 1

    unit_seconds_map = {
        # seconds
        "s": 1,
        "sec": 1,
        "secs": 1,
        "second": 1,
        "seconds": 1,
        # minutes
        "m": 60,
        "min": 60,
        "mins": 60,
        "minute": 60,
        "minutes": 60,
        # hours
        "h": 3600,
        "hr": 3600,
        "hrs": 3600,
        "hour": 3600,
        "hours": 3600,
        # days
        "d": 86400,
        "day": 86400,
        "days": 86400,
        # months (30 days)
        "mo": 2592000,
        "mon": 2592000,
        "month": 2592000,
        "months": 2592000,
        # years (365 days)
        "y": 31536000,
        "yr": 31536000,
        "yrs": 31536000,
        "year": 31536000,
        "years": 31536000,
    }
    if unit not in unit_seconds_map:
        raise ValueError(f"unknown time unit '{unit}' in rate '{rate}'")
    seconds = mult * unit_seconds_map[unit]
    return times, seconds
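The rate grammar documented in `_parse_rate` can be exercised with a trimmed-down re-implementation. This sketch supports only a handful of units and ignores the abbreviation table, purely for illustration:

```python
import re

UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}


def parse_rate(rate: str) -> tuple[int, int]:
    """Parse 'N/unit', 'N per unit', or 'N/seconds' into (times, seconds)."""
    left, right = rate.strip().lower().replace("per ", "/").split("/", 1)
    times = int(left.strip())
    right = right.strip()
    if right.isdigit():  # plain integer seconds, e.g. "100/60"
        return times, int(right)
    m = re.fullmatch(r"(?:(\d+)\s*)?([a-z]+?)s?", right)  # optional multiplier, optional plural
    mult = int(m.group(1)) if m.group(1) else 1
    return times, mult * UNIT_SECONDS[m.group(2)]


assert parse_rate("10/hour") == (10, 3600)
assert parse_rate("10 per hour") == (10, 3600)
assert parse_rate("100/60") == (100, 60)
assert parse_rate("500/7days") == (500, 7 * 86400)
```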


def _try_json(s: str):
    try:
        return loads(s)
    except Exception:
        return None


def _load_rules_from_env(env_val: Optional[str]) -> List[_Rule]:
    if not env_val:
        return []
    raw = env_val.strip()
    if raw.startswith(("[", "{")):
        data = _try_json(raw)
        rules: List[_Rule] = []
        if isinstance(data, dict):
            for k, v in data.items():
                rate = str(v)
                parts = str(k).split()
                if len(parts) == 1:
                    methods: Set[str] = set()
                    path = parts[0]
                else:
                    methods = {_normalize_method(x) for x in parts[0].split("|") if x}
                    path = " ".join(parts[1:])
                times, seconds = _parse_rate(rate)
                rules.append(_Rule(methods, _compile_pattern(path), times, seconds, f"{k}={v}"))
        elif isinstance(data, list):
            for it in data:
                if not isinstance(it, dict):
                    continue
                path = str(it.get("path", "/"))
                methods: Set[str] = set()
                m = it.get("methods")
                if isinstance(m, str):
                    methods = {_normalize_method(x) for x in m.split("|") if x}
                elif isinstance(m, list):
                    methods = {_normalize_method(str(x)) for x in m if x}
                times = int(it.get("times", api_config.rate_limit_times))
                seconds = int(it.get("seconds", api_config.rate_limit_seconds))
                rules.append(_Rule(methods, _compile_pattern(path), times, seconds, dumps(it)))
        return rules

    # Shorthand CSV: "POST /auth 10/60, /instances* 100/60"
    rules: List[_Rule] = []
    for chunk in _csv_items(raw):
        s = chunk.strip()
        m = search(r"(\S+)$", s)
        if not m:
            continue
        rate = m.group(1)
        try:
            times, seconds = _parse_rate(rate)
        except Exception:
            continue
        head = s[: -len(rate)].strip()
        toks = head.split()
        if not toks:
            continue
        if "/" in toks[0]:
            methods: Set[str] = set()
            path = head
        else:
            methods = {_normalize_method(x) for x in toks[0].split("|") if x}
            path = " ".join(toks[1:]).strip()
            if not path:
                path = "/"
        rules.append(_Rule(methods, _compile_pattern(path), times, seconds, s))
    return rules


def _load_rules_from_data(data) -> List[_Rule]:
    """Load rules from a Python data structure (dict or list), e.g. parsed from YAML.

    Supported forms mirror the JSON formats handled by _load_rules_from_env:
    - Dict mapping "METHODS PATH" or "PATH" -> "times/seconds" (string)
    - List of objects with keys: path, methods (string or list), times, seconds
    """
    rules: List[_Rule] = []
    if isinstance(data, dict):
        for k, v in data.items():
            rate = str(v)
            parts = str(k).split()
            if len(parts) == 1:
                methods: Set[str] = set()
                path = parts[0]
            else:
                methods = {_normalize_method(x) for x in parts[0].split("|") if x}
                path = " ".join(parts[1:])
            times, seconds = _parse_rate(rate)
            rules.append(_Rule(methods, _compile_pattern(path), times, seconds, f"{k}={v}"))
    elif isinstance(data, list):
        for it in data:
            if not isinstance(it, dict):
                continue
            path = str(it.get("path", "/"))
            methods: Set[str] = set()
            m = it.get("methods")
            if isinstance(m, str):
                methods = {_normalize_method(x) for x in m.split("|") if x}
            elif isinstance(m, list):
                methods = {_normalize_method(str(x)) for x in m if x}
            times = int(it.get("times", api_config.rate_limit_times))
            seconds = int(it.get("seconds", api_config.rate_limit_seconds))
            rules.append(_Rule(methods, _compile_pattern(path), times, seconds, dumps(it)))
    return rules


def _load_rules_from_file(path_str: str) -> List[_Rule]:
    """Load rules from a file path.

    Behavior:
    - If the content looks like JSON, delegate to the JSON/string loader.
    - Else try YAML via yaml.safe_load; if it yields a dict/list, parse accordingly.
    - Else fall back to CSV-like parsing using the string loader.
    """
    try:
        p = Path(path_str)
        if not p.is_file():
            return []
        raw = p.read_text(encoding="utf-8").strip()
        if not raw:
            return []
        if raw.startswith(("[", "{")):
            return _load_rules_from_env(raw)
        with suppress(Exception):
            data = safe_load(raw)
            if isinstance(data, (list, dict)):
                return _load_rules_from_data(data)
        return _load_rules_from_env(raw)
    except Exception:
        return []


def _client_identifier(request: Request) -> str:
    """Return the client IP address."""
    return request.client.host if request.client else "0.0.0.0"


def _limit_string(times: int, seconds: int) -> str:
    # Prefer friendly units when we can express the window exactly
    if seconds in (1, 60, 3600, 86400, 2592000, 31536000):
        unit_map = {1: "second", 60: "minute", 3600: "hour", 86400: "day", 2592000: "month", 31536000: "year"}
        unit = unit_map[seconds]
        return f"{times}/{unit}"

    for base, name in ((31536000, "year"), (2592000, "month"), (86400, "day"), (3600, "hour"), (60, "minute")):
        if seconds % base == 0:
            n = seconds // base
            plural = name + ("s" if n != 1 else "")
            return f"{times}/{n}{plural}"

    return f"{times} per {seconds} seconds"
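`_limit_string` turns a `(times, seconds)` pair back into the friendly rate notation that slowapi accepts. The unit-folding step can be sketched as a single loop over unit bases (this standalone version folds everything, including exact single units, through one table):

```python
BASES = ((31536000, "year"), (2592000, "month"), (86400, "day"), (3600, "hour"), (60, "minute"), (1, "second"))


def limit_string(times: int, seconds: int) -> str:
    """Render e.g. (10, 3600) -> '10/hour' and (500, 604800) -> '500/7days'."""
    for base, name in BASES:
        if seconds % base == 0:  # largest unit that divides the window exactly
            n = seconds // base
            if n == 1:
                return f"{times}/{name}"
            return f"{times}/{n}{name}s"
    return f"{times} per {seconds} seconds"


assert limit_string(10, 3600) == "10/hour"
assert limit_string(500, 7 * 86400) == "500/7days"
assert limit_string(100, 60) == "100/minute"
```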


def _parse_limits_list(env_val: Optional[str]) -> List[str]:
    if not env_val:
        return []
    raw = env_val.strip()
    if not raw:
        return []
    if raw.startswith("["):
        try:
            data = loads(raw)
            return [str(x).strip() for x in data if str(x).strip()]
        except Exception:
            return []
    # CSV (comma or semicolon), supports quoted items and newlines
    return _csv_items(raw)


def _csv_items(s: str) -> List[str]:
    """Parse a CSV-like string of items using the csv module.

    - Detects the delimiter between comma and semicolon.
    - Handles quoted items and newlines.
    - Trims whitespace and drops empty fields.
    """
    if not s:
        return []
    sample = s
    try:
        dialect = Sniffer().sniff(sample, delimiters=",;")
        delimiter = dialect.delimiter
    except Exception:
        # Fallback: prefer comma when present
        delimiter = "," if ("," in s or ";" not in s) else ";"
    items: List[str] = []
    reader = csv_reader(StringIO(s), delimiter=delimiter, skipinitialspace=True)
    for row in reader:
        for field in row:
            val = field.strip()
            if val:
                items.append(val)
    return items
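`_csv_items` leans on `csv.Sniffer` to pick the delimiter before splitting. A minimal standalone version of that trick, kept close to the original but with a simpler comma fallback:

```python
from csv import Sniffer, reader as csv_reader
from io import StringIO


def csv_items(s: str) -> list[str]:
    """Split a comma- or semicolon-separated string, honouring quoted fields."""
    try:
        delimiter = Sniffer().sniff(s, delimiters=",;").delimiter  # guess the delimiter
    except Exception:
        delimiter = ","  # fallback when the sample is too ambiguous to sniff
    items = []
    for row in csv_reader(StringIO(s), delimiter=delimiter, skipinitialspace=True):
        items.extend(field.strip() for field in row if field.strip())
    return items


assert csv_items("10/minute, 100/hour") == ["10/minute", "100/hour"]
assert csv_items("a;b;c") == ["a", "b", "c"]
```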
|
||||
|
||||
|
||||
def _build_key_func():
|
||||
sel = (api_config.API_RATE_LIMIT_KEY or "ip").strip().lower()
|
||||
if sel == "ip":
|
||||
return _client_identifier
|
||||
if sel in ("ip_ua", "ip-user-agent", "ip-useragent"):
|
||||
|
||||
def _key(req: Request) -> str: # type: ignore[return-type]
|
||||
ip = _client_identifier(req)
|
||||
ua = req.headers.get("user-agent", "")
|
||||
return f"{ip}|{ua}"
|
||||
|
||||
return _key
|
||||
if sel.startswith("header:"):
|
||||
header_name = sel.split(":", 1)[1].strip()
|
||||
|
||||
def _key(req: Request) -> str: # type: ignore[return-type]
|
||||
val = req.headers.get(header_name) or req.headers.get(header_name.title()) or ""
|
||||
return val or _client_identifier(req)
|
||||
|
||||
return _key
|
||||
return _client_identifier


def _parse_exempt_ips(env_val: Optional[str]) -> List[Union[IPv4Network, IPv6Network]]:
    nets: List[Union[IPv4Network, IPv6Network]] = []
    if not env_val:
        return nets
    for tok in split(r"[\s,]+", env_val.strip()):
        if not tok:
            continue
        try:
            if "/" in tok:
                nets.append(ip_network(tok, strict=False))
            else:
                # single IP
                ip = ip_address(tok)
                nets.append(ip_network(ip.exploded + ("/32" if ip.version == 4 else "/128"), strict=False))
        except Exception:
            continue
    return nets
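Single IPs are widened to `/32` (IPv4) or `/128` (IPv6) networks so that membership tests stay uniform; a small sketch of that normalization (helper name illustrative):

```python
from ipaddress import ip_address, ip_network


def to_network(tok: str):
    # CIDR tokens pass through; bare addresses become host networks.
    if "/" in tok:
        return ip_network(tok, strict=False)
    ip = ip_address(tok)
    return ip_network(ip.exploded + ("/32" if ip.version == 4 else "/128"), strict=False)


print(ip_address("10.1.2.3") in to_network("10.0.0.0/8"))    # True
print(ip_address("192.168.1.5") in to_network("192.168.1.5"))  # True
```

With every entry being a network, the exemption check is a single `ip in net` loop regardless of how the operator wrote the list.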


def is_enabled() -> bool:
    return _enabled


def _match_rule(method: str, path: str) -> Optional[Tuple[int, int]]:
    if not _rules:
        return None
    m = _normalize_method(method)
    paths = [path]
    rp = (api_config.API_ROOT_PATH or "").rstrip("/")
    if rp and path.startswith(rp + "/"):
        paths.append(path[len(rp) :])  # noqa: E203
    for rule in _rules:
        if rule.methods and m not in rule.methods and "*" not in rule.methods:
            continue
        for p in paths:
            if rule.pattern.match(p):
                return rule.times, rule.seconds
    return None


def limiter_dep_dynamic():
    async def _dep(request: Request):  # pragma: no cover
        if not _enabled or _limiter is None:
            return None
        # Exempt IPs
        with suppress(Exception):
            cip = _client_identifier(request)
            ipobj = ip_address(cip)
            for net in _exempt_networks:
                if ipobj in net:
                    return None
        method = request.method
        path = request.scope.get("path", "/")
        match = _match_rule(method, path)

        async def _noop(request: Request, response: Response | None = None):
            return None

        for lstr in _application_limits:
            try:
                # Pass a dummy response so slowapi can inject headers without crashing
                await _limiter.limit(lstr)(_noop)(request, response=Response())  # type: ignore[arg-type]
            except RateLimitExceeded:
                raise
        limit_strings: List[str] = []
        if match is None:
            if _default_limits:
                limit_strings = _default_limits.copy()
            else:
                limit_strings = [_limit_string(api_config.rate_limit_times, api_config.rate_limit_seconds)]
        else:
            t, s = match
            if t <= 0:
                return None
            limit_strings = [_limit_string(t, s)]

        for lstr in limit_strings:
            try:
                # Pass a dummy response so slowapi can inject headers without crashing
                await _limiter.limit(lstr)(_noop)(request, response=Response())  # type: ignore[arg-type]
            except RateLimitExceeded:
                raise
        return None

    return Depends(_dep)


def setup_rate_limiter(app) -> None:
    if not api_config.rate_limit_enabled:
        LOGGER.info("API rate limiting disabled by configuration")
        return

    global _limiter, _enabled, _rules, _default_limits, _application_limits, _exempt_networks

    rules_val = getattr(api_config, "API_RATE_LIMIT_RULES", None)
    if isinstance(rules_val, (list, dict)):
        _rules = _load_rules_from_data(rules_val)
    elif isinstance(rules_val, str):
        # If the value is a path to a file, load from file; otherwise treat as inline string
        _rules = _load_rules_from_file(rules_val) if Path(rules_val).is_file() else _load_rules_from_env(rules_val)
    else:
        _rules = _load_rules_from_env(None)

    # Optional storage options (JSON), can be augmented by Redis settings
    storage_options: Dict[str, Any] = {}
    storage: Optional[str] = None
    so_raw = api_config.API_RATE_LIMIT_STORAGE_OPTIONS
    if so_raw:
        so = _try_json(so_raw)
        if isinstance(so, dict):
            storage_options = {str(k): v for k, v in so.items()}

    # Auto-derive Redis settings, preferring environment variables first, then database
    try:
        cfg = get_db(log=False).get_config(
            global_only=True,
            methods=False,
            filtered_settings=(
                "USE_REDIS",
                "REDIS_HOST",
                "REDIS_PORT",
                "REDIS_DATABASE",
                "REDIS_TIMEOUT",
                "REDIS_KEEPALIVE_POOL",
                "REDIS_SSL",
                "REDIS_SSL_VERIFY",
                "REDIS_USERNAME",
                "REDIS_PASSWORD",
                "REDIS_SENTINEL_HOSTS",
                "REDIS_SENTINEL_USERNAME",
                "REDIS_SENTINEL_PASSWORD",
                "REDIS_SENTINEL_MASTER",
            ),
        )
    except Exception:
        cfg = {}

    def _env_or_cfg(name: str, default: str | None = None) -> str | None:
        val = getenv(name)
        if val is not None:
            return val
        return cfg.get(name, default)  # type: ignore[return-value]

    storage = None
    use_redis = str(_env_or_cfg("USE_REDIS", "no") or "no").lower() == "yes"
    if use_redis:
        sentinels = str(_env_or_cfg("REDIS_SENTINEL_HOSTS", "") or "").strip()
        sentinel_master = str(_env_or_cfg("REDIS_SENTINEL_MASTER", "") or "").strip()
        username = str(_env_or_cfg("REDIS_USERNAME", "") or "").strip()
        password = str(_env_or_cfg("REDIS_PASSWORD", "") or "").strip()
        redis_ssl = str(_env_or_cfg("REDIS_SSL", "no") or "no").lower() == "yes"
        # timeouts are in ms; convert to seconds for redis client options
        try:
            timeout_ms = float(str(_env_or_cfg("REDIS_TIMEOUT", "1000") or "1000"))
        except Exception:
            timeout_ms = 1000.0
        try:
            keepalive_pool = int(str(_env_or_cfg("REDIS_KEEPALIVE_POOL", "10") or "10"))
        except Exception:
            keepalive_pool = 10

        # Build options common to redis storages
        storage_options.setdefault("socket_timeout", timeout_ms / 1000.0)
        storage_options.setdefault("socket_connect_timeout", timeout_ms / 1000.0)
        storage_options.setdefault("socket_keepalive", True)
        storage_options.setdefault("max_connections", keepalive_pool)

        if sentinels and sentinel_master:
            # redis sentinel URI: redis+sentinel://[user:pass@]h1:26379,h2:26379/master
            auth = ""
            if username or password:
                auth = f"{username}:{password}@"
            # ensure ports on sentinels (default 26379)
            parts = []
            for item in sentinels.split():
                host, _, port = item.partition(":")
                parts.append(f"{host}:{port or '26379'}")
            hostlist = ",".join(parts)
            storage = f"redis+sentinel://{auth}{hostlist}/{sentinel_master}"
            # pass SSL and auth to sentinel via options
            storage_options.setdefault("ssl", redis_ssl)
            sent_user = str(_env_or_cfg("REDIS_SENTINEL_USERNAME", "") or "").strip()
            sent_pass = str(_env_or_cfg("REDIS_SENTINEL_PASSWORD", "") or "").strip()
            if sent_user or sent_pass:
                storage_options.setdefault("sentinel_kwargs", {"username": sent_user, "password": sent_pass})
        else:
            # Direct redis connection
            host = str(_env_or_cfg("REDIS_HOST", "") or "").strip()
            port = str(_env_or_cfg("REDIS_PORT", "6379") or "6379").strip()
            db = str(_env_or_cfg("REDIS_DATABASE", "0") or "0").strip()
            scheme = "rediss" if redis_ssl else "redis"
            auth = ""
            if username or password:
                auth = f"{username}:{password}@"
            if host:
                storage = f"{scheme}://{auth}{host}:{port}/{db}"

    # Final fallback
    if not storage:
        storage = "memory://"

    # Parse lists from env
    _default_limits = _parse_limits_list(api_config.API_RATE_LIMIT_DEFAULTS)
    _application_limits = _parse_limits_list(api_config.API_RATE_LIMIT_APPLICATION_LIMITS)
    _exempt_networks = _parse_exempt_ips(api_config.API_RATE_LIMIT_EXEMPT_IPS)

    # Normalize strategy names to SlowAPI/limits canonical values
    orig_strategy = (api_config.API_RATE_LIMIT_STRATEGY or "fixed-window").strip().lower()
    strategy = orig_strategy
    if strategy in ("fixed", "fixed-window", "fixed_window"):
        strategy = "fixed-window"
    elif strategy in ("moving", "moving-window", "moving_window"):
        strategy = "moving-window"
    elif strategy in ("sliding", "sliding-window", "sliding_window", "sliding-window-counter", "sliding_window_counter"):
        strategy = "sliding-window-counter"
    else:
        # Fallback to fixed-window if unknown
        strategy = "fixed-window"
        LOGGER.warning(f"Unknown API rate limit strategy '{orig_strategy}'; falling back to '{strategy}'")

    _limiter = Limiter(
        key_func=_build_key_func(),
        default_limits=_default_limits or [_limit_string(api_config.rate_limit_times, api_config.rate_limit_seconds)],  # type: ignore[arg-type]
        application_limits=_application_limits,  # type: ignore[arg-type]
        storage_uri=storage,
        storage_options=storage_options,
        strategy=strategy,
        headers_enabled=api_config.rate_limit_headers_enabled,
        key_prefix="bwapi-rl-",
    )
    app.state.limiter = _limiter

    # Use slowapi's default handler to include useful headers
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
    _enabled = True
    LOGGER.info(
        f"Rate limiting enabled with storage={storage}; strategy={strategy}; headers={api_config.rate_limit_headers_enabled}; "
        f"app_limits={len(_application_limits)}; defaults={len(_default_limits) if _default_limits else 1}; rules={len(_rules)}"
    )
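The sentinel branch above assembles a `redis+sentinel://` URI from a space-separated host list, defaulting the sentinel port to 26379; a standalone sketch of that assembly (function name hypothetical):

```python
def sentinel_storage_uri(hosts: str, master: str, username: str = "", password: str = "") -> str:
    # Space-separated "host[:port]" tokens; missing ports default to 26379.
    auth = f"{username}:{password}@" if (username or password) else ""
    parts = []
    for item in hosts.split():
        host, _, port = item.partition(":")
        parts.append(f"{host}:{port or '26379'}")
    return f"redis+sentinel://{auth}{','.join(parts)}/{master}"
```

The master name goes in the URI path, so the limits storage backend can ask the sentinels which node is currently primary.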
1 src/api/app/routers/__init__.py Normal file

@@ -0,0 +1 @@
"""Routers package."""

151 src/api/app/routers/auth.py Normal file

@@ -0,0 +1,151 @@
from contextlib import suppress
from datetime import datetime, timezone
from typing import Optional, Tuple

from fastapi import APIRouter, HTTPException, Request, Depends
from fastapi.security import HTTPBasic, HTTPBasicCredentials
from fastapi.responses import JSONResponse
from biscuit_auth import BiscuitBuilder, PrivateKey

from common_utils import get_version  # type: ignore
from ..utils import BISCUIT_PRIVATE_KEY_FILE, LOGGER, check_password, get_api_db
from ..config import api_config
from ..auth.common import get_auth_header


router = APIRouter(prefix="/auth", tags=["auth"])
security = HTTPBasic(auto_error=False)
# Use shared logger instance from utils


@router.post("")
async def login(request: Request, credentials: HTTPBasicCredentials | None = Depends(security)) -> JSONResponse:
    """Authenticate and return a Biscuit token.

    Accepted credential sources (in order):
    - Authorization: Basic <base64(username:password)>
    - Authorization: Bearer <API_TOKEN> (admin override)
    - Form body: username=<u>&password=<p>
    - JSON body: {"username": "...", "password": "..."}
    """

    async def _from_form() -> Optional[Tuple[str, str]]:
        with suppress(Exception):
            form = await request.form()
            u = form.get("username")
            p = form.get("password")
            if u and p:
                return str(u), str(p)
        return None

    async def _from_json() -> Optional[Tuple[str, str]]:
        with suppress(Exception):
            data = await request.json()
            if isinstance(data, dict):
                u = data.get("username")
                p = data.get("password")
                if u and p:
                    return str(u), str(p)
        return None

    authz = get_auth_header(request)
    creds = None
    if credentials is not None:
        # Use FastAPI's HTTPBasic to get username/password
        creds = (credentials.username or "", credentials.password or "")
    is_admin_override = False
    if not creds and authz.lower().startswith("bearer ") and api_config.API_TOKEN:
        token_val = authz.split(" ", 1)[1].strip()
        if token_val and token_val == api_config.API_TOKEN:
            is_admin_override = True

    if not creds and not is_admin_override:
        creds = await _from_form() or await _from_json()
    if not creds and not is_admin_override:
        raise HTTPException(status_code=401, detail="Missing or invalid credentials")

    username: Optional[str] = None
    password: Optional[str] = None
    if creds:
        username, password = creds

    db = get_api_db(log=False)
    if is_admin_override:
        user = db.get_api_user(as_dict=True) or {"username": "admin", "admin": True}
        is_admin = True
    else:
        user = db.get_api_user(username=username or "", as_dict=True)
        if not user:
            raise HTTPException(status_code=401, detail="Invalid credentials")
        stored_hash = user.get("password") or b""
        if not isinstance(stored_hash, (bytes, bytearray)):
            stored_hash = str(stored_hash or "").encode("utf-8")
        if not password or not check_password(password, stored_hash):
            raise HTTPException(status_code=401, detail="Invalid credentials")
        is_admin = bool(user.get("admin"))

    perms: set[str] = set()
    fine_grained: list[tuple[str, str, str]] = []
    if is_admin:
        perms.update(["read", "write"])
    else:
        try:
            rows = db.get_api_permissions(username=username)  # type: ignore
        except Exception:
            rows = []
        for row in rows or []:
            name = getattr(row, "permission", "") or ""
            lname = name.lower()
            if "read" in lname:
                perms.add("read")
            if any(x in lname for x in ("create", "update", "delete", "execute", "run", "convert", "export")):
                perms.add("write")
            rtype = getattr(row, "resource_type", "") or ""
            rid = getattr(row, "resource_id", None) or "*"
            fine_grained.append((str(rtype), str(rid), str(name)))

    try:
        priv_hex = BISCUIT_PRIVATE_KEY_FILE.read_text(encoding="utf-8").strip()
        if not priv_hex:
            raise RuntimeError("Biscuit private key not found")
        private_key = PrivateKey.from_hex(priv_hex)
    except Exception as e:
        LOGGER.error(f"/login: failed to load Biscuit private key: {e}")
        raise HTTPException(status_code=500, detail="Authentication service unavailable")

    client_ip = request.client.host if request.client else "0.0.0.0"
    host = request.headers.get("host", "bwapi")
    builder = BiscuitBuilder(
        f"""
        user("{(user.get('username') if isinstance(user, dict) else username) or 'user'}");
        time({datetime.now(timezone.utc).isoformat()});
        client_ip("{client_ip}");
        domain("{host}");
        version("{get_version()}");
        """
    )

    # The API has no role logic; encode read/write under a fixed role name.
    role_name = "api_user"
    if "read" in perms and "write" in perms:
        builder.add_code(f'role("{role_name}", ["read", "write"]);')
    elif "read" in perms:
        builder.add_code(f'role("{role_name}", ["read"]);')
    else:
        raise HTTPException(status_code=403, detail="No permissions assigned to user")

    # Embed fine-grained permissions as facts
    def _esc(val: str) -> str:
        return val.replace("\\", "\\\\").replace('"', '\\"')

    if is_admin:
        builder.add_code("admin(true);")
    else:
        for rtype, rid, pname in fine_grained:
            builder.add_code(f'api_perm("{_esc(rtype)}", "{_esc(rid)}", "{_esc(pname)}");')

    token = builder.build(private_key)
    return JSONResponse(status_code=200, content={"token": token.to_base64()})
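The `_esc` helper quotes values before interpolating them into Biscuit Datalog facts; the order matters, as this standalone copy shows:

```python
def esc(val: str) -> str:
    # Escape backslashes first, then double quotes, so already-escaped
    # characters are not double-processed in the fact string.
    return val.replace("\\", "\\\\").replace('"', '\\"')


print(esc('say "hi"'))  # say \"hi\"
```

Doing the quote replacement first would turn `"` into `\"`, after which the backslash pass would corrupt it to `\\"`.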

86 src/api/app/routers/bans.py Normal file

@@ -0,0 +1,86 @@
import json
from typing import List, Union

from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse

from ..auth.guard import guard
from ..deps import get_instances_api_caller
from ..schemas import BanRequest, UnbanRequest


router = APIRouter(prefix="/bans", tags=["bans"])


@router.get("", dependencies=[Depends(guard)])
def list_bans(api_caller=Depends(get_instances_api_caller)) -> JSONResponse:
    """List all active bans across all BunkerWeb instances."""
    ok, responses = api_caller.send_to_apis("GET", "/bans", response=True)
    return JSONResponse(status_code=200 if ok else 502, content=responses or {"status": "error", "msg": "internal error"})


@router.post("/ban", dependencies=[Depends(guard)])
@router.post("", dependencies=[Depends(guard)])
def ban(req: Union[List[BanRequest], BanRequest, str], api_caller=Depends(get_instances_api_caller)) -> JSONResponse:
    """Ban one or multiple IP addresses across all BunkerWeb instances.

    Args:
        req: Ban request(s) containing IP, expiration, reason, and optional service
    """
    # Support body as JSON object, list, or stringified JSON
    if isinstance(req, str):
        try:
            loaded = json.loads(req)
            if isinstance(loaded, list):
                items: List[BanRequest] = [BanRequest(**it) for it in loaded]
            elif isinstance(loaded, dict):
                items = [BanRequest(**loaded)]
            else:
                return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid request body"})
        except Exception:
            return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid request body"})
    else:
        items = req if isinstance(req, list) else [req]

    all_ok = True
    for it in items:
        payload = it.model_dump()
        # Derive ban_scope from service presence: if no service, scope is global
        service = (payload.get("service") or "").strip() if isinstance(payload.get("service"), str) else payload.get("service")
        if service:
            payload["ban_scope"] = "service"
        else:
            payload["ban_scope"] = "global"
        # Remove empty service to avoid ambiguity downstream
        payload.pop("service", None)
        ok, _ = api_caller.send_to_apis("POST", "/ban", data=payload)
        all_ok = all_ok and ok
    return JSONResponse(status_code=200 if all_ok else 502, content={"status": "success" if all_ok else "error"})


@router.post("/unban", dependencies=[Depends(guard)])
@router.delete("", dependencies=[Depends(guard)])
def unban(req: Union[List[UnbanRequest], UnbanRequest, str], api_caller=Depends(get_instances_api_caller)) -> JSONResponse:
    """Remove one or multiple bans across all BunkerWeb instances.

    Args:
        req: Unban request(s) containing IP and optional service
    """
    if isinstance(req, str):
        try:
            loaded = json.loads(req)
            if isinstance(loaded, list):
                items: List[UnbanRequest] = [UnbanRequest(**it) for it in loaded]
            elif isinstance(loaded, dict):
                items = [UnbanRequest(**loaded)]
            else:
                return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid request body"})
        except Exception:
            return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid request body"})
    else:
        items = req if isinstance(req, list) else [req]

    all_ok = True
    for it in items:
        ok, _ = api_caller.send_to_apis("POST", "/unban", data=it.model_dump())
        all_ok = all_ok and ok
    return JSONResponse(status_code=200 if all_ok else 502, content={"status": "success" if all_ok else "error"})
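Both endpoints repeat the same "dict, list, or stringified JSON" body normalization; the pattern factors out into a small helper (name and error type illustrative):

```python
import json
from typing import Any, List


def normalize_body(req: Any) -> List[dict]:
    # Accept a dict, a list of dicts, or a JSON string encoding either,
    # and always return a list of dicts; anything else is rejected.
    if isinstance(req, str):
        req = json.loads(req)  # may raise on malformed JSON
    if isinstance(req, dict):
        return [req]
    if isinstance(req, list) and all(isinstance(it, dict) for it in req):
        return req
    raise ValueError("Invalid request body")
```

The caller would wrap this in a try/except and map failures to a 422 response, as the routes above do inline.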

181 src/api/app/routers/cache.py Normal file

@@ -0,0 +1,181 @@
from contextlib import suppress
from datetime import datetime
from typing import Any, Dict, List, Optional

from fastapi import APIRouter, Depends, Query
from fastapi.responses import JSONResponse, Response

from ..auth.guard import guard
from ..schemas import CacheFilesDeleteRequest, CacheFileKey
from ..utils import get_db


router = APIRouter(prefix="/cache", tags=["cache"])


def _normalize_service(value: Optional[str]) -> Optional[str]:
    if value in (None, "", "global"):
        return None
    return str(value)


def _decode_printable(b: Optional[bytes]) -> tuple[str, bool]:
    if not isinstance(b, (bytes, bytearray)):
        return "", False
    try:
        s = b.decode("utf-8")
    except Exception:
        return "Download file to view content", False
    # Simple heuristic: printable if the UTF-8 decode succeeded and there are
    # no control characters beyond common whitespace
    if any(ord(ch) < 9 or (13 < ord(ch) < 32) for ch in s):
        return "Download file to view content", False
    return s, True


@router.get("", dependencies=[Depends(guard)])
def list_cache(
    service: Optional[str] = None,
    plugin: Optional[str] = None,
    job_name: Optional[str] = None,
    with_data: bool = Query(False, description="Include file data inline (text only)"),
) -> JSONResponse:
    """List cache files from job executions.

    Args:
        service: Filter by service ID
        plugin: Filter by plugin ID
        job_name: Filter by job name
        with_data: Include file content (text files only)
    """
    items = get_db().get_jobs_cache_files(with_data=with_data, job_name=job_name or "", plugin_id=plugin or "")
    out: List[Dict[str, Any]] = []
    for it in items:
        if service not in (None, "", "global") and it.get("service_id") != service:
            continue
        data = {
            "plugin": it.get("plugin_id"),
            "job_name": it.get("job_name"),
            "service": it.get("service_id") or "global",
            "file_name": it.get("file_name"),
            "last_update": it.get("last_update").astimezone().isoformat() if it.get("last_update") else None,
            "checksum": it.get("checksum"),
        }
        if with_data:
            text, printable = _decode_printable(it.get("data"))
            data["data"] = text
            data["printable"] = printable
        out.append(data)
    return JSONResponse(status_code=200, content={"status": "success", "cache": out})


def _transform_filename(path_token: str) -> str:
    # The UI uses a special encoding for folders: prefix "folder:" and replace '_' with '/'
    if path_token.startswith("folder:"):
        return path_token.replace("_", "/")[len("folder:") :]  # noqa: E203
    return path_token


@router.get(
    "/{service}/{plugin_id}/{job_name}/{file_name}",
    dependencies=[Depends(guard)],
    response_model=None,
)
def fetch_cache_file(
    service: str,
    plugin_id: str,
    job_name: str,
    file_name: str,
    download: bool = False,
) -> Response:
    """Fetch content of a specific cache file.

    Args:
        service: Service ID
        plugin_id: Plugin ID
        job_name: Job name
        file_name: File name
        download: Return as downloadable attachment
    """
    db = get_db()
    fname = _transform_filename(file_name)
    data = db.get_job_cache_file(job_name, fname, service_id=_normalize_service(service) or "", plugin_id=plugin_id, with_info=True, with_data=True)
    if not data:
        return JSONResponse(status_code=404, content={"status": "error", "message": "Cache file not found"})
    if download:
        content = data.get("data") if isinstance(data, dict) else data
        if not isinstance(content, (bytes, bytearray)):
            content = b""
        headers = {"Content-Disposition": f"attachment; filename={fname}"}
        return Response(status_code=200, content=content, media_type="application/octet-stream", headers=headers)
    # Return printable content only
    content = data.get("data") if isinstance(data, dict) else data
    text, printable = _decode_printable(content)
    return JSONResponse(
        status_code=200,
        content={
            "status": "success",
            "file": {
                "plugin": plugin_id,
                "job_name": job_name,
                "service": service or "global",
                "file_name": fname,
                "last_update": (datetime.fromtimestamp(data.get("last_update")).astimezone().isoformat() if isinstance(data, dict) and data.get("last_update") else None),  # type: ignore
                "checksum": (data.get("checksum") if isinstance(data, dict) else None),
                "data": text,
                "printable": printable,
            },
        },
    )


@router.delete("", dependencies=[Depends(guard)])
def delete_cache_files(payload: CacheFilesDeleteRequest) -> JSONResponse:
    """Delete multiple cache files.

    Args:
        payload: Request containing list of cache files to delete
    """
    items = payload.cache_files

    db = get_db()
    deleted = 0
    errors: List[str] = []
    changed_plugins: set[str] = set()
    for it in items:
        fname = _transform_filename(it.fileName)
        job = it.jobName
        svc = _normalize_service(it.service)
        plug = it.plugin
        if not fname or not job:
            continue
        err = db.delete_job_cache(fname, job_name=job, service_id=svc)
        if err:
            errors.append(f"{fname}: {err}")
        else:
            changed_plugins.add(plug)
            deleted += 1

    # Notify scheduler to apply changes for affected plugins
    with suppress(Exception):
        db.checked_changes(changes=["config"], plugins_changes=list(changed_plugins), value=True)

    status_code = 207 if errors and deleted else (400 if errors and not deleted else 200)
    body: Dict[str, Any] = {"status": "success" if deleted and not errors else ("partial" if deleted else "error")}
    body["deleted"] = deleted
    if errors:
        body["errors"] = errors
    return JSONResponse(status_code=status_code, content=body)


@router.delete("/{service}/{plugin_id}/{job_name}/{file_name}", dependencies=[Depends(guard)])
def delete_cache_file(service: str, plugin_id: str, job_name: str, file_name: str) -> JSONResponse:
    """Delete a specific cache file.

    Args:
        service: Service ID
        plugin_id: Plugin ID
        job_name: Job name
        file_name: File name
    """
    req = CacheFilesDeleteRequest(cache_files=[CacheFileKey(service=service, plugin=plugin_id, jobName=job_name, fileName=file_name)])
    return delete_cache_files(req)
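The printable-content heuristic used by the cache router (UTF-8 decode plus a control-character check that still allows tab, newline, and carriage return) can be tried standalone:

```python
def decode_printable(b):
    # Returns (text, printable). Non-bytes input and binary content both
    # come back as non-printable, mirroring the router's behavior.
    if not isinstance(b, (bytes, bytearray)):
        return "", False
    try:
        s = b.decode("utf-8")
    except Exception:
        return "Download file to view content", False
    # ord < 9 catches NUL..BEL; 13 < ord < 32 catches SO..US, while
    # tab (9), LF (10), VT/FF (11/12), and CR (13) pass through.
    if any(ord(ch) < 9 or (13 < ord(ch) < 32) for ch in s):
        return "Download file to view content", False
    return s, True


print(decode_printable(b"hello\nworld")[1])  # True
print(decode_printable(b"\x00binary")[1])    # False
```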

397 src/api/app/routers/configs.py Normal file

@@ -0,0 +1,397 @@
from contextlib import suppress
from pathlib import Path
from re import compile as re_compile, sub as re_sub
from typing import Any, Dict, List, Optional

from fastapi import APIRouter, Depends, UploadFile, File, Form
from fastapi.responses import JSONResponse

from ..auth.guard import guard
from ..utils import get_db
from ..schemas import ConfigCreateRequest, ConfigUpdateRequest, ConfigsDeleteRequest, ConfigKey


router = APIRouter(prefix="/configs", tags=["configs"])

_NAME_RX = re_compile(r"^[\w_-]{1,64}$")

# Accepted config types (normalized form).
_CONFIG_TYPES = {
    # HTTP-level
    "http",
    "server_http",
    "default_server_http",
    # ModSecurity
    "modsec_crs",
    "modsec",
    # Stream
    "stream",
    "server_stream",
    # CRS plugins
    "crs_plugins_before",
    "crs_plugins_after",
}


def _normalize_type(t: str) -> str:
    return t.strip().replace("-", "_").lower()


def _validate_name(name: str) -> Optional[str]:
    if not name or not _NAME_RX.match(name):
        return "Invalid name: must match ^[\\w_-]{1,64}$"
    return None


def _sanitize_name_from_filename(filename: str) -> str:
    base = Path(filename).stem
    # Replace invalid chars with underscore and collapse repeats
    cleaned = re_sub(r"[^\w-]+", "_", base).strip("_-")
    if len(cleaned) > 64:
        cleaned = cleaned[:64]
    return cleaned
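The sanitizer above derives a config name from an uploaded filename; the same rules can be checked in isolation (helper name illustrative):

```python
from pathlib import Path
from re import sub as re_sub


def sanitize_name(filename: str) -> str:
    # Basename without extension, runs of invalid characters collapsed to
    # "_", edge underscores/hyphens trimmed, capped at 64 characters.
    base = Path(filename).stem
    cleaned = re_sub(r"[^\w-]+", "_", base).strip("_-")
    return cleaned[:64]


print(sanitize_name("My Config (v2).conf"))  # My_Config_v2
```

Because `Path(...).stem` drops directories, path-traversal input like `../etc/passwd` reduces to just `passwd` before validation.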
||||
|
||||
|
||||
def _service_exists(service: Optional[str]) -> bool:
|
||||
if not service:
|
||||
return True
|
||||
db = get_db()
|
||||
try:
|
||||
conf = db.get_config(global_only=True, methods=False, with_drafts=True)
|
||||
return service in (conf.get("SERVER_NAME", "") or "").split(" ")
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def _decode_data(val: bytes | str | None) -> str:
|
||||
if val is None:
|
||||
return ""
|
||||
if isinstance(val, bytes):
|
||||
with suppress(Exception):
|
||||
return val.decode("utf-8")
|
||||
return val.decode("utf-8", errors="replace")
|
||||
return str(val)
|
||||
|
||||
|
||||
@router.get("", dependencies=[Depends(guard)])
|
||||
def list_configs(service: Optional[str] = None, type: Optional[str] = None, with_drafts: bool = True, with_data: bool = False) -> JSONResponse: # noqa: A002
|
||||
"""List custom configs.
|
||||
|
||||
Query params:
|
||||
- service: service id, or "global"/empty for global configs
|
||||
- type: optional filter (e.g., http, server_http, modsec, ...)
|
||||
- with_drafts: include draft services when computing templates
|
||||
- with_data: include the content of configs
|
||||
"""
|
||||
db = get_db()
|
||||
s_filter = None if (service in (None, "", "global")) else service
|
||||
t_filter = _normalize_type(type) if type else None # type: ignore[arg-type]
|
||||
items = db.get_custom_configs(with_drafts=with_drafts, with_data=with_data)
|
||||
|
||||
out: List[Dict[str, Any]] = []
|
||||
for it in items:
|
||||
if s_filter is not None and it.get("service_id") != s_filter:
|
||||
continue
|
||||
if t_filter is not None and it.get("type") != t_filter:
|
||||
continue
|
||||
data = {k: v for k, v in it.items() if k != "data"}
|
||||
if with_data:
|
||||
data["data"] = _decode_data(it.get("data"))
|
||||
# Normalize global service presentation
|
||||
data["service"] = data.pop("service_id", None) or "global"
|
||||
out.append(data)
|
||||
|
||||
return JSONResponse(status_code=200, content={"status": "success", "configs": out})
|
||||
|
||||
|
||||
@router.post("/upload", dependencies=[Depends(guard)])
|
||||
async def upload_configs(
|
||||
files: List[UploadFile] = File(..., description="One or more config files to create"),
|
||||
service: Optional[str] = Form(None, description='Service id; use "global" or leave empty for global'),
|
||||
type: str = Form(..., description="Config type, e.g., http, server_http, modsec, ..."), # noqa: A002
|
||||
) -> JSONResponse:
|
||||
"""Create new custom configs from uploaded files (method="api").
|
||||
|
||||
The config name is derived from each file's basename (without extension),
|
||||
sanitized to `^[\\w_-]{1,64}$`.
|
||||
|
||||
Args:
|
||||
files: Config files to upload
|
||||
service: Service ID or "global"
|
||||
type: Config type
|
||||
"""
|
||||
s_id = None if service in (None, "", "global") else service
|
||||
ctype = _normalize_type(type)
|
||||
if ctype not in _CONFIG_TYPES:
|
||||
return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid type"})
|
||||
if not _service_exists(s_id):
|
||||
return JSONResponse(status_code=404, content={"status": "error", "message": "Service not found"})
|
||||
|
||||
db = get_db()
|
||||
created: List[str] = []
|
||||
errors: List[Dict[str, str]] = []
|
||||
|
||||
for f in files:
|
||||
try:
|
||||
filename = f.filename or ""
|
||||
name = _sanitize_name_from_filename(filename)
|
||||
err = _validate_name(name)
|
||||
if err:
|
||||
errors.append({"file": filename or name, "error": err})
|
||||
continue
|
||||
            content_bytes = await f.read()
            # Decode as UTF-8 text; errors="replace" substitutes undecodable bytes and cannot raise
            if not isinstance(content_bytes, (bytes, bytearray)):
                # Should not happen, but guard anyway
                config_content = str(content_bytes)
            else:
                config_content = content_bytes.decode("utf-8", errors="replace")

            error = db.upsert_custom_config(
                ctype,
                name,
                {"service_id": s_id, "type": ctype, "name": name, "data": config_content, "method": "api"},
                service_id=s_id,
                new=True,
            )
            if error:
                errors.append({"file": filename or name, "error": error})
            else:
                created.append(f"{(s_id or 'global')}/{ctype}/{name}")
        except Exception as e:
            errors.append({"file": f.filename or "(unknown)", "error": str(e)})

    status_code = 207 if errors and created else (400 if errors and not created else 201)
    content: Dict[str, Any] = {"status": "success" if created and not errors else ("partial" if created else "error")}
    if created:
        content["created"] = created
    if errors:
        content["errors"] = errors
    return JSONResponse(status_code=status_code, content=content)

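The three-way response-code mapping above (201 / 207 / 400) can be factored into a small pure function. `pick_status` below is a hypothetical helper for illustration, not part of the router:

```python
from typing import List


def pick_status(created: List[str], errors: List[dict]) -> int:
    """Mirror upload_configs' outcome mapping."""
    if errors and created:
        return 207  # Multi-Status: some files created, some rejected
    if errors:
        return 400  # every file failed validation or persistence
    return 201  # everything was created (also covers empty input)
```

The 207 code follows the WebDAV Multi-Status convention for batch operations that partially succeed.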
@router.get("/{service}/{config_type}/{name}", dependencies=[Depends(guard)])
def get_config(service: str, config_type: str, name: str, with_data: bool = True) -> JSONResponse:
    """Get a specific custom config.

    Args:
        service: Service ID or "global"
        config_type: Config type
        name: Config name
        with_data: Include config content
    """
    db = get_db()
    s_id = None if service in (None, "", "global") else service
    ctype = _normalize_type(config_type)
    item = db.get_custom_config(ctype, name, service_id=s_id, with_data=with_data)
    if not item:
        return JSONResponse(status_code=404, content={"status": "error", "message": "Config not found"})
    data = {k: v for k, v in item.items() if k != "data"}
    if with_data:
        data["data"] = _decode_data(item.get("data"))
    data["service"] = data.pop("service_id", None) or "global"
    return JSONResponse(status_code=200, content={"status": "success", "config": data})

@router.post("", dependencies=[Depends(guard)])
def create_config(req: ConfigCreateRequest) -> JSONResponse:
    """Create a new custom config (method="api").

    Body:
    - service: optional service id (use "global" or omit for global)
    - type: config type (e.g., http, server_http, modsec, ...)
    - name: config name (^[\\w_-]{1,64}$)
    - data: content as UTF-8 string
    """
    service = req.service
    ctype = req.type
    name = req.name
    data = req.data
    if not _service_exists(service):
        return JSONResponse(status_code=404, content={"status": "error", "message": "Service not found"})

    error = get_db().upsert_custom_config(
        ctype,
        name,
        {"service_id": service, "type": ctype, "name": name, "data": data, "method": "api"},
        service_id=service,
        new=True,
    )
    if error:
        code = 400 if ("already exists" in error or "read-only" in error) else 500
        return JSONResponse(status_code=code, content={"status": "error", "message": error})
    return JSONResponse(status_code=201, content={"status": "success"})

@router.patch("/{service}/{config_type}/{name}", dependencies=[Depends(guard)])
def update_config(service: str, config_type: str, name: str, req: ConfigUpdateRequest) -> JSONResponse:
    """Update or move a custom config. Only configs managed by method "api" or template-derived ones can be edited via API."""
    s_orig = None if service in (None, "", "global") else service
    ctype_orig = _normalize_type(config_type)
    name_orig = name

    db = get_db()
    current = db.get_custom_config(ctype_orig, name_orig, service_id=s_orig, with_data=True)
    if not current:
        return JSONResponse(status_code=404, content={"status": "error", "message": "Config not found"})
    # Enforce ownership similar to UI: allow editing only if current method is "api" or the item is template-derived
    if not current.get("template") and current.get("method") != "api":
        return JSONResponse(status_code=403, content={"status": "error", "message": "Config is not API-managed and cannot be edited"})

    # New values (optional)
    service_new = req.service
    type_new = req.type if req.type is not None else current.get("type")
    name_new = req.name if req.name is not None else current.get("name")
    data_new = req.data if req.data is not None else _decode_data(current.get("data"))
    if not _service_exists(service_new):
        return JSONResponse(status_code=404, content={"status": "error", "message": "Service not found"})

    if (
        current.get("type") == type_new
        and current.get("name") == name_new
        and current.get("service_id") == service_new
        and _decode_data(current.get("data")) == data_new
    ):
        return JSONResponse(status_code=400, content={"status": "error", "message": "No values were changed"})

    error = db.upsert_custom_config(
        ctype_orig,
        name_orig,
        {"service_id": service_new, "type": type_new, "name": name_new, "data": data_new, "method": "api"},
        service_id=s_orig,
    )
    if error:
        code = 400 if ("read-only" in error or "already exists" in error or "does not exist" in error) else 500
        return JSONResponse(status_code=code, content={"status": "error", "message": error})
    return JSONResponse(status_code=200, content={"status": "success"})

@router.patch("/{service}/{config_type}/{name}/upload", dependencies=[Depends(guard)])
async def update_config_upload(
    service: str,
    config_type: str,
    name: str,
    file: UploadFile = File(...),
    new_service: Optional[str] = Form(None),
    new_type: Optional[str] = Form(None),
    new_name: Optional[str] = Form(None),
) -> JSONResponse:
    """Update an existing custom config using an uploaded file.

    Optional form fields `new_service`, `new_type`, `new_name` allow moving/renaming.
    """
    s_orig = None if service in (None, "", "global") else service
    ctype_orig = _normalize_type(config_type)
    name_orig = name

    db = get_db()
    current = db.get_custom_config(ctype_orig, name_orig, service_id=s_orig, with_data=True)
    if not current:
        return JSONResponse(status_code=404, content={"status": "error", "message": "Config not found"})
    if not current.get("template") and current.get("method") != "api":
        return JSONResponse(status_code=403, content={"status": "error", "message": "Config is not API-managed and cannot be edited"})

    s_new = None if new_service in (None, "", "global") else new_service
    t_new = _normalize_type(new_type) if new_type else current.get("type")
    n_new = new_name.strip() if isinstance(new_name, str) and new_name else current.get("name")
    if n_new == current.get("name") and not new_name:
        # If no explicit new_name, derive name from uploaded file if different
        filename = file.filename or ""
        derived = _sanitize_name_from_filename(filename)
        if derived and derived != n_new:
            n_new = derived

    if t_new not in _CONFIG_TYPES:
        return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid type"})
    err = _validate_name(n_new)
    if err:
        return JSONResponse(status_code=422, content={"status": "error", "message": err})
    if not _service_exists(s_new):
        return JSONResponse(status_code=404, content={"status": "error", "message": "Service not found"})

    content_bytes = await file.read()
    # errors="replace" cannot raise, so no fallback is needed
    content = content_bytes.decode("utf-8", errors="replace")

    if current.get("type") == t_new and current.get("name") == n_new and current.get("service_id") == s_new and _decode_data(current.get("data")) == content:
        return JSONResponse(status_code=400, content={"status": "error", "message": "No values were changed"})

    error = db.upsert_custom_config(
        ctype_orig,
        name_orig,
        {"service_id": s_new, "type": t_new, "name": n_new, "data": content, "method": "api"},
        service_id=s_orig,
    )
    if error:
        code = 400 if ("read-only" in error or "already exists" in error or "does not exist" in error) else 500
        return JSONResponse(status_code=code, content={"status": "error", "message": error})
    return JSONResponse(status_code=200, content={"status": "success"})

@router.delete("", dependencies=[Depends(guard)])
def delete_configs(req: ConfigsDeleteRequest) -> JSONResponse:
    """Delete multiple API-managed custom configs.

    Body example:
        {"configs": [{"service": "global", "type": "http", "name": "my_snippet"}, ...]}
    Only configs with method == "api" will be deleted; others are ignored.
    """
    configs = req.configs

    # Build a set of keys to delete
    to_del = {(it.service, it.type, it.name) for it in configs}

    if not to_del:
        return JSONResponse(status_code=422, content={"status": "error", "message": "No valid configs to delete"})

    db = get_db()
    # Keep only API-managed configs not in to_del
    current = db.get_custom_configs(with_drafts=True, with_data=True)
    keep: List[Dict[str, Any]] = []
    skipped: List[str] = []
    for it in current:
        key = (it.get("service_id"), it.get("type"), it.get("name"))
        if it.get("method") != "api":
            # Not API-managed: ignore deletions for these
            if key in to_del:
                skipped.append(f"{(it.get('service_id') or 'global')}/{it.get('type')}/{it.get('name')}")
            continue
        if key in to_del:
            # delete -> skip adding to keep
            continue
        # Convert to expected format for save_custom_configs
        keep.append(
            {
                "service_id": it.get("service_id") or None,
                "type": it.get("type"),
                "name": it.get("name"),
                "data": it.get("data") or b"",
                "method": "api",
            }
        )

    err = db.save_custom_configs(keep, "api")
    if err:
        return JSONResponse(status_code=500, content={"status": "error", "message": err})

    content: Dict[str, Any] = {"status": "success"}
    if skipped:
        content["skipped"] = skipped
    return JSONResponse(status_code=200, content=content)

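The keep/skip partitioning above effectively re-saves the whole API-managed set minus the selected keys, while targeted configs owned by another method are only reported as skipped. A sketch of that loop on plain dicts (sample data, not real configs):

```python
# Keys are (service_id, type, name); None stands for the global scope.
to_del = {(None, "http", "a"), (None, "http", "c")}
current = [
    {"service_id": None, "type": "http", "name": "a", "method": "api"},
    {"service_id": None, "type": "http", "name": "b", "method": "api"},
    {"service_id": None, "type": "http", "name": "c", "method": "ui"},
]

keep, skipped = [], []
for it in current:
    key = (it["service_id"], it["type"], it["name"])
    if it["method"] != "api":
        if key in to_del:
            skipped.append(it["name"])  # not API-managed: deletion ignored
        continue  # non-API configs are never part of the re-saved set
    if key in to_del:
        continue  # selected API-managed config: dropped
    keep.append(it)  # API-managed survivor, re-saved as-is
```

Only "b" survives into the re-saved set; "c" is reported as skipped because a UI-managed config cannot be deleted through this endpoint.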
@router.delete("/{service}/{config_type}/{name}", dependencies=[Depends(guard)])
def delete_config(service: str, config_type: str, name: str) -> JSONResponse:
    """Delete a single API-managed custom config by replacing the API set without the selected item."""
    s_id = None if service in (None, "", "global") else service
    ctype = _normalize_type(config_type)
    # Reuse batch deletion logic
    return delete_configs(ConfigsDeleteRequest(configs=[ConfigKey(service=s_id or "global", type=ctype, name=name)]))
50 src/api/app/routers/core.py Normal file
@@ -0,0 +1,50 @@
from fastapi import APIRouter
from fastapi.responses import JSONResponse

from ..utils import get_api_db


router = APIRouter(tags=["core"])  # Utils-only (ping, health)
from .auth import router as auth_router
from .instances import router as instances_router
from .global_config import router as global_config_router
from .bans import router as bans_router
from .services import router as services_router
from .configs import router as configs_router
from .plugins import router as plugins_router
from .cache import router as cache_router
from .jobs import router as jobs_router


@router.get("/ping")
def ping() -> dict:
    """Simple ping/pong health check endpoint."""
    return {"status": "ok", "message": "pong"}


@router.get("/health")
def health() -> JSONResponse:
    """Lightweight liveness probe for the API service itself.

    Returns 200 when the FastAPI service is up and routing requests.
    Does not call internal BunkerWeb instances.
    """
    return JSONResponse(status_code=200, content={"status": "ok"})


# Mount category routers under core
# Conditionally expose auth endpoints only if API users exist
_adb = get_api_db(log=False)
_has_api_user = _adb.has_api_user()

if _has_api_user:
    router.include_router(auth_router)

router.include_router(instances_router)
router.include_router(bans_router)
router.include_router(global_config_router)
router.include_router(services_router)
router.include_router(configs_router)
router.include_router(plugins_router)
router.include_router(cache_router)
router.include_router(jobs_router)
67 src/api/app/routers/global_config.py Normal file
@@ -0,0 +1,67 @@
from typing import Dict

from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse

from ..auth.guard import guard
from ..utils import get_db
from ..schemas import GlobalConfigUpdate


router = APIRouter(prefix="/global_config", tags=["global_config"])


@router.get("", dependencies=[Depends(guard)])
def read_global_config(full: bool = False, methods: bool = False) -> JSONResponse:
    """Read the current global configuration settings.

    Args:
        full: Include all settings, even those with default values
        methods: Include method metadata for each setting
    """
    db = get_db()
    if full:
        conf = db.get_config(global_only=True, methods=methods)
    else:
        conf = db.get_non_default_settings(global_only=True, methods=methods)
    return JSONResponse(status_code=200, content={"status": "success", "config": conf})


def _current_api_global_overrides() -> Dict[str, str]:
    """Return only current global settings that are set via method 'api'.

    Values are returned as a flat dict: {setting_id: value}.
    """
    overrides: Dict[str, str] = {}
    conf = get_db().get_non_default_settings(global_only=True, methods=True, with_drafts=False)
    for key, meta in conf.items():
        try:
            if isinstance(meta, dict) and meta.get("method") == "api":
                overrides[key] = str(meta.get("value", ""))
        except Exception:
            # Be robust to unexpected values
            continue
    return overrides


@router.patch("", dependencies=[Depends(guard)])
def update_global_config(payload: GlobalConfigUpdate) -> JSONResponse:
    """Update global configuration settings.

    Args:
        payload: JSON object with setting key-value pairs to update
    """
    # Normalize values to strings (DB expects strings for settings)
    to_set: Dict[str, str] = {}
    for k, v in payload.root.items():
        to_set[str(k)] = "" if v is None else str(v)

    base = _current_api_global_overrides()
    base.update(to_set)
    ret = get_db().save_config(base, "api", changed=True)
    if isinstance(ret, str):
        code = 400 if ret and ("read-only" in ret or "already exists" in ret or "doesn't exist" in ret) else (200 if ret == "" else 500)
        status = "success" if code == 200 else "error"
        return JSONResponse(status_code=code, content={"status": status, "message": ret} if status == "error" else {"status": status})
    return JSONResponse(status_code=200, content={"status": "success"})
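The PATCH handler above layers the payload over the settings already owned by method "api" before re-saving the full set: omitting a key preserves its previous override, while sending null clears the value to an empty string. A minimal sketch of that merge, with sample setting names and values:

```python
# Settings currently owned by method "api" (sample values, not real state).
existing_api_overrides = {"USE_ANTIBOT": "captcha", "LOG_LEVEL": "info"}
# Incoming PATCH body: null clears LOG_LEVEL, HTTP_PORT is newly set.
payload = {"LOG_LEVEL": None, "HTTP_PORT": 8080}

# Same normalization as update_global_config: coerce to str, None -> "".
to_set = {str(k): "" if v is None else str(v) for k, v in payload.items()}
merged = dict(existing_api_overrides)
merged.update(to_set)
```

The merged dict is what `save_config(..., "api", changed=True)` would receive: USE_ANTIBOT survives untouched even though the payload never mentions it.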
276 src/api/app/routers/instances.py Normal file
@@ -0,0 +1,276 @@
from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse
from typing import Optional, List, Tuple
import re

from ..auth.guard import guard
from ..deps import get_instances_api_caller, get_api_for_hostname
from ..schemas import InstanceCreateRequest, InstancesDeleteRequest, InstanceUpdateRequest
from ..config import api_config
from ..utils import get_db, LOGGER

# Shared libs


router = APIRouter(prefix="/instances", tags=["instances"])


# ---------- Instance actions broadcasted to all instances ----------
@router.get("/ping", dependencies=[Depends(guard)])
def ping(api_caller=Depends(get_instances_api_caller)) -> JSONResponse:
    """Ping all registered BunkerWeb instances to check their availability."""
    ok, responses = api_caller.send_to_apis("GET", "/ping", response=True)
    return JSONResponse(status_code=200 if ok else 502, content=responses or {"status": "error", "msg": "internal error"})


@router.post("/reload", dependencies=[Depends(guard)])
def reload_config(test: bool = True, api_caller=Depends(get_instances_api_caller)) -> JSONResponse:
    """Reload configuration on all registered BunkerWeb instances.

    Args:
        test: If True, validate configuration without applying it (default: True)
    """
    test_arg = "yes" if test else "no"
    ok, _ = api_caller.send_to_apis("POST", f"/reload?test={test_arg}")
    return JSONResponse(status_code=200 if ok else 502, content={"status": "success" if ok else "error"})


@router.post("/stop", dependencies=[Depends(guard)])
def stop(api_caller=Depends(get_instances_api_caller)) -> JSONResponse:
    """Stop all registered BunkerWeb instances."""
    ok, _ = api_caller.send_to_apis("POST", "/stop")
    return JSONResponse(status_code=200 if ok else 502, content={"status": "success" if ok else "error"})


# ---------- Instance actions for a single instance ----------
@router.get("/{hostname}/ping", dependencies=[Depends(guard)])
def ping_one(hostname: str, api=Depends(get_api_for_hostname)) -> JSONResponse:
    """Ping a specific BunkerWeb instance to check its availability.

    Args:
        hostname: The hostname of the instance to ping
    """
    sent, err, status, resp = api.request("GET", "/ping")
    if not sent or status != 200:
        return JSONResponse(status_code=502, content={"status": "error", "msg": (err or getattr(resp, "get", lambda _k: None)("msg")) or "internal error"})
    return JSONResponse(status_code=200, content=resp if isinstance(resp, dict) else {"status": "ok"})


@router.post("/{hostname}/reload", dependencies=[Depends(guard)])
def reload_one(hostname: str, test: bool = True, api=Depends(get_api_for_hostname)) -> JSONResponse:
    """Reload configuration on a specific BunkerWeb instance.

    Args:
        hostname: The hostname of the instance to reload
        test: If True, validate configuration without applying it (default: True)
    """
    test_arg = "yes" if test else "no"
    sent, _err, status, _resp = api.request("POST", f"/reload?test={test_arg}")
    ok = bool(sent and status == 200)
    return JSONResponse(status_code=200 if ok else 502, content={"status": "success" if ok else "error"})


@router.post("/{hostname}/stop", dependencies=[Depends(guard)])
def stop_one(hostname: str, api=Depends(get_api_for_hostname)) -> JSONResponse:
    """Stop a specific BunkerWeb instance.

    Args:
        hostname: The hostname of the instance to stop
    """
    sent, _err, status, _resp = api.request("POST", "/stop")
    ok = bool(sent and status == 200)
    return JSONResponse(status_code=200 if ok else 502, content={"status": "success" if ok else "error"})

# -------------------- CRUD over BunkerWeb instances --------------------
_DOMAIN_RE = re.compile(r"^(?!.*\.\.)[^\s/:]{1,256}$")


def _normalize_hostname_and_port(hostname: str, port: Optional[int]) -> Tuple[str, Optional[int]]:
    # Strip scheme and split port if provided in hostname
    host = hostname.replace("http://", "").replace("https://", "").lower()
    if ":" in host:
        h, p = host.split(":", 1)
        try:
            return h, int(p)
        except ValueError:
            return h, port
    return host, port


def _validate_port(port: Optional[int]) -> Optional[int]:
    """Validate a TCP port (1..65535). Returns the int or raises ValueError."""
    if port is None:
        return None
    try:
        p = int(port)
    except Exception:
        raise ValueError("Port must be an integer")
    if p < 1 or p > 65535:
        raise ValueError("Port must be between 1 and 65535")
    return p

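The two helpers above can be exercised standalone. This sketch reproduces `_normalize_hostname_and_port` with sample inputs to show the precedence rules: a port embedded in the hostname wins over the explicit argument, and an unparsable inline port falls back to it:

```python
from typing import Optional, Tuple


def normalize(hostname: str, port: Optional[int]) -> Tuple[str, Optional[int]]:
    # Same behaviour as _normalize_hostname_and_port: drop the scheme,
    # lower-case, and prefer a port embedded in the hostname.
    host = hostname.replace("http://", "").replace("https://", "").lower()
    if ":" in host:
        h, p = host.split(":", 1)
        try:
            return h, int(p)
        except ValueError:
            return h, port  # inline "port" was not an integer
    return host, port
```

Note that `"https://"` survives the `"http://"` replacement (it is not a substring), so both schemes must be stripped explicitly, as the router does.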
@router.get("", dependencies=[Depends(guard)])
def list_instances() -> JSONResponse:
    """List all registered BunkerWeb instances with their details."""
    instances = get_db().get_instances()
    for instance in instances:
        instance["creation_date"] = instance["creation_date"].astimezone().isoformat()
        instance["last_seen"] = instance["last_seen"].astimezone().isoformat() if instance.get("last_seen") else None

    return JSONResponse(status_code=200, content={"status": "success", "instances": instances})


@router.post("", dependencies=[Depends(guard)])
def create_instance(req: InstanceCreateRequest) -> JSONResponse:
    """Create a new BunkerWeb instance.

    Args:
        req: Instance creation request with hostname, port, server_name, etc.
    """
    db = get_db()

    # Derive defaults from api_config when not provided
    name = req.name or "manual instance"
    method = req.method or "api"
    hostname, port = _normalize_hostname_and_port(req.hostname, req.port)

    if not _DOMAIN_RE.match(hostname):
        return JSONResponse(status_code=422, content={"status": "error", "message": f"Invalid hostname: {hostname}"})

    server_name = req.server_name or api_config.internal_api_host_header
    # Validate provided port or use default from api_config
    if port is not None:
        try:
            port = _validate_port(port)
        except ValueError as ve:
            return JSONResponse(status_code=422, content={"status": "error", "message": f"Invalid port: {ve}"})
    else:
        try:
            port = _validate_port(int(api_config.internal_api_port))
        except Exception:
            LOGGER.exception("Invalid API_HTTP_PORT in api_config; must be 1..65535")
            return JSONResponse(status_code=500, content={"status": "error", "message": "internal error"})

    err = db.add_instance(hostname=hostname, port=port, server_name=server_name, method=method, name=name)
    if err:
        code = 400 if "already exists" in err or "read-only" in err else 500
        return JSONResponse(status_code=code, content={"status": "error", "message": err})

    return JSONResponse(
        status_code=201,
        content={"status": "success", "instance": {"hostname": hostname, "name": name, "port": port, "server_name": server_name, "method": method}},
    )

@router.get("/{hostname}", dependencies=[Depends(guard)])
def get_instance(hostname: str) -> JSONResponse:
    """Get details of a specific BunkerWeb instance.

    Args:
        hostname: The hostname of the instance to retrieve
    """
    instance = get_db().get_instance(hostname)
    if not instance:
        return JSONResponse(status_code=404, content={"status": "error", "message": f"Instance {hostname} not found"})

    instance["creation_date"] = instance["creation_date"].astimezone().isoformat()
    instance["last_seen"] = instance["last_seen"].astimezone().isoformat() if instance.get("last_seen") else None
    return JSONResponse(status_code=200, content={"status": "success", "instance": instance})


@router.patch("/{hostname}", dependencies=[Depends(guard)])
def update_instance(hostname: str, req: InstanceUpdateRequest) -> JSONResponse:
    """Update properties of a specific BunkerWeb instance.

    Args:
        hostname: The hostname of the instance to update
        req: Update request with new values for name, port, server_name, method
    """
    db = get_db()

    # Validate optional port if provided
    if req.port is not None:
        try:
            _ = _validate_port(req.port)
        except ValueError as ve:
            return JSONResponse(status_code=422, content={"status": "error", "message": f"Invalid port: {ve}"})

    err = db.update_instance_fields(
        hostname,
        name=req.name,
        port=int(req.port) if req.port is not None else None,
        server_name=req.server_name,
        method=req.method,
    )
    if err:
        code = 400 if ("does not exist" in err or "read-only" in err) else 500
        return JSONResponse(status_code=code, content={"status": "error", "message": err})

    instance = db.get_instance(hostname)
    return JSONResponse(status_code=200, content={"status": "success", "instance": instance})

@router.delete("/{hostname}", dependencies=[Depends(guard)])
def delete_instance(hostname: str) -> JSONResponse:
    """Delete a specific BunkerWeb instance.

    Args:
        hostname: The hostname of the instance to delete
    """
    db = get_db()

    inst = db.get_instance(hostname)
    if not inst:
        return JSONResponse(status_code=404, content={"status": "error", "message": f"Instance {hostname} not found"})
    if inst.get("method") != "api":
        return JSONResponse(status_code=400, content={"status": "error", "message": f"Instance {hostname} is not an API instance"})

    err = db.delete_instance(hostname)
    if err:
        # err is an error string, not an active exception, so log without a traceback
        LOGGER.error(f"DELETE /instances/{hostname} failed: {err}")
        return JSONResponse(status_code=500, content={"status": "error", "message": err})

    return JSONResponse(status_code=200, content={"status": "success", "deleted": hostname})


@router.delete("", dependencies=[Depends(guard)])
def delete_instances(req: InstancesDeleteRequest) -> JSONResponse:
    """Delete multiple BunkerWeb instances.

    Args:
        req: Request containing list of hostnames to delete
    """
    db = get_db()

    # Only delete instances created via API
    existing = {inst["hostname"]: inst for inst in db.get_instances()}

    to_delete: List[str] = []
    skipped: List[str] = []
    for h in req.instances:
        inst = existing.get(h)
        if not inst:
            skipped.append(h)
            continue
        if inst.get("method") != "api":
            skipped.append(h)
            continue
        to_delete.append(h)

    if not to_delete:
        return JSONResponse(
            status_code=404,
            content={
                "status": "error",
                "message": "No deletable API instances found among selection",
                "skipped": skipped,
            },
        )

    err = db.delete_instances(to_delete)
    if err:
        return JSONResponse(status_code=500, content={"status": "error", "message": err, "skipped": skipped})

    return JSONResponse(status_code=200, content={"status": "success", "deleted": to_delete, "skipped": skipped})
32 src/api/app/routers/jobs.py Normal file
@@ -0,0 +1,32 @@
from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse
from ..schemas import RunJobsRequest

from ..auth.guard import guard
from ..utils import get_db


router = APIRouter(prefix="/jobs", tags=["jobs"])


@router.get("", dependencies=[Depends(guard)])
def list_jobs() -> JSONResponse:
    """List all jobs with their history and cache metadata."""
    jobs = get_db().get_jobs()
    return JSONResponse(status_code=200, content={"status": "success", "jobs": jobs})


@router.post("/run", dependencies=[Depends(guard)])
def run_jobs(payload: RunJobsRequest) -> JSONResponse:
    """Trigger execution of specified jobs' plugins.

    Args:
        payload: Request containing list of jobs to run with plugin and name
    """
    plugins = [j.plugin for j in payload.jobs]

    ret = get_db().checked_changes(["config"], plugins_changes=plugins.copy(), value=True)
    if ret:
        # DB returns error string on failure
        return JSONResponse(status_code=500, content={"status": "error", "message": str(ret)})
    return JSONResponse(status_code=202, content={"status": "success", "message": "Jobs scheduled"})
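`run_jobs` answers 202 because it only flags the selected plugins as changed in the database; a separate scheduler process later notices the flags and actually executes the jobs. A toy sketch of that hand-off, with hypothetical names (the real scheduler lives elsewhere in BunkerWeb):

```python
changed_plugins: set = set()


def mark_changed(plugins) -> int:
    """Endpoint side: record which plugins need their jobs re-run."""
    changed_plugins.update(plugins)
    return 202  # accepted, not yet executed


def scheduler_tick() -> list:
    """Scheduler side: consume the flags and report what would run."""
    ran = sorted(changed_plugins)
    changed_plugins.clear()
    return ran
```

Because the flag set is consumed on each tick, repeated requests for the same plugin before a tick collapse into a single run.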
299 src/api/app/routers/plugins.py Normal file
@@ -0,0 +1,299 @@
from contextlib import suppress
from io import BytesIO
from json import JSONDecodeError, loads as json_loads
from os import sep
from pathlib import Path
from re import compile as re_compile
from tarfile import TarFile, open as tar_open
from typing import Any, Dict, List, Optional
from zipfile import ZipFile, BadZipFile

from fastapi import APIRouter, Depends, File, Form, UploadFile
from fastapi.responses import JSONResponse

from ..auth.guard import guard
from ..utils import get_db

from common_utils import bytes_hash  # type: ignore


router = APIRouter(prefix="/plugins", tags=["plugins"])

_PLUGIN_ID_RX = re_compile(r"^[\w.-]{4,64}$")
_RECOGNIZED_TYPES = {"all", "external", "ui", "pro"}

TMP_UI_ROOT = Path(sep, "var", "tmp", "bunkerweb", "ui")


def _safe_member_path(root: Path, member_name: str) -> Optional[Path]:
    try:
        # Prevent absolute paths and path traversal
        if member_name.startswith("/"):
            return None
        target = (root / member_name).resolve()
        if not str(target).startswith(str(root.resolve())):
            return None
        return target
    except Exception:
        return None

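The traversal guard above can be demonstrated standalone; this sketch reproduces `_safe_member_path` against hostile and benign archive member names:

```python
from pathlib import Path
from typing import Optional


def safe_member_path(root: Path, member_name: str) -> Optional[Path]:
    # Same guard as _safe_member_path: reject absolute members and any
    # path that resolves outside the extraction root.
    try:
        if member_name.startswith("/"):
            return None
        target = (root / member_name).resolve()
        if not str(target).startswith(str(root.resolve())):
            return None
        return target
    except Exception:
        return None


root = Path("/var/tmp/bunkerweb/ui/plugin")
escaped = safe_member_path(root, "../../etc/passwd")   # traversal out of root
absolute = safe_member_path(root, "/etc/passwd")       # absolute member name
inside = safe_member_path(root, "ui/template.html")    # legitimate member
```

One caveat of the string-prefix check: it also accepts a sibling directory that merely shares the prefix (e.g. `.../plugin2`); on Python 3.9+, `Path.is_relative_to` is the stricter alternative.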
def _extract_plugin_from_tar(tar: TarFile, root_dir: str, dest: Path) -> None:
|
||||
for member in tar.getmembers():
|
||||
# Filter only entries under the plugin root dir
|
||||
name = member.name
|
||||
if root_dir:
|
||||
if not name.startswith(root_dir + "/") and name != root_dir:
|
||||
continue
|
||||
rel = name[len(root_dir) + 1 :] if name != root_dir else "" # noqa: E203
|
||||
else:
|
||||
rel = name
|
||||
if rel == "":
|
||||
continue
|
||||
target = _safe_member_path(dest, rel)
|
||||
if target is None:
|
||||
continue
|
||||
if member.isdir():
|
||||
target.mkdir(parents=True, exist_ok=True)
|
||||
elif member.isfile() or member.isreg():
|
||||
target.parent.mkdir(parents=True, exist_ok=True)
|
||||
with tar.extractfile(member) as src: # type: ignore[arg-type]
|
||||
if src is None:
|
||||
continue
|
||||
target.write_bytes(src.read())


def _extract_plugin_from_zip(zipf: ZipFile, root_dir: str, dest: Path) -> None:
    for name in zipf.namelist():
        if root_dir:
            if not name.startswith(root_dir + "/") and name != root_dir:
                continue
            rel = name[len(root_dir) + 1 :] if name != root_dir else ""  # noqa: E203
        else:
            rel = name
        if not rel or rel.endswith("/"):
            # Directory entry
            d = _safe_member_path(dest, rel)
            if d is not None:
                d.mkdir(parents=True, exist_ok=True)
            continue
        target = _safe_member_path(dest, rel)
        if target is None:
            continue
        target.parent.mkdir(parents=True, exist_ok=True)
        with zipf.open(name) as src:
            target.write_bytes(src.read())


def _find_plugin_roots_in_tar(tar: TarFile) -> List[str]:
    roots: List[str] = []
    names = [m.name for m in tar.getmembers()]
    for n in names:
        if n.endswith("plugin.json"):
            roots.append(str(Path(n).parent))
    # Normalize a root of "." to "" when plugin.json is at the archive root
    return [r if r != "." else "" for r in roots]


def _find_plugin_roots_in_zip(zipf: ZipFile) -> List[str]:
    roots: List[str] = []
    for n in zipf.namelist():
        if n.endswith("plugin.json"):
            roots.append(str(Path(n).parent))
    return [r if r != "." else "" for r in roots]
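The root-discovery convention above (parent directory of each `plugin.json`, with `""` meaning the archive root) can be illustrated with an in-memory zip. The helper is re-stated so the sketch is self-contained, and the plugin names are made up:

```python
from io import BytesIO
from pathlib import Path
from typing import List
from zipfile import ZipFile


def _find_plugin_roots_in_zip(zipf: ZipFile) -> List[str]:
    # Same logic as the helper above
    roots: List[str] = []
    for n in zipf.namelist():
        if n.endswith("plugin.json"):
            roots.append(str(Path(n).parent))
    return [r if r != "." else "" for r in roots]


buf = BytesIO()
with ZipFile(buf, "w") as zf:
    zf.writestr("myplugin/plugin.json", '{"id": "myplugin"}')
    zf.writestr("plugin.json", '{"id": "rootplugin"}')  # plugin at the archive root
with ZipFile(BytesIO(buf.getvalue())) as zf:
    roots = _find_plugin_roots_in_zip(zf)
print(roots)  # ['myplugin', ''] -- "" marks the archive root
```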


@router.get("", dependencies=[Depends(guard)])
def list_plugins(type: str = "all", with_data: bool = False) -> JSONResponse:  # noqa: A002
    """List plugins of the specified type.

    Args:
        type: Plugin type filter ("all", "external", "ui", "pro")
        with_data: Include plugin data/content
    """
    if type not in _RECOGNIZED_TYPES:
        return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid type"})
    plugins = get_db().get_plugins(_type=type, with_data=with_data)
    return JSONResponse(status_code=200, content={"status": "success", "plugins": plugins})


@router.delete("/{plugin_id}", dependencies=[Depends(guard)])
def delete_plugin(plugin_id: str) -> JSONResponse:
    """Delete a UI plugin.

    Args:
        plugin_id: ID of the plugin to delete
    """
    if not _PLUGIN_ID_RX.match(plugin_id):
        return JSONResponse(status_code=422, content={"status": "error", "message": "Invalid plugin id"})
    err = get_db().delete_plugin(plugin_id, "ui", changes=True)
    if err:
        return JSONResponse(status_code=404, content={"status": "error", "message": err})
    return JSONResponse(status_code=200, content={"status": "success"})


@router.post("/upload", dependencies=[Depends(guard)])
async def upload_plugins(files: List[UploadFile] = File(...), method: str = Form("ui")) -> JSONResponse:
    """Upload and install UI plugins from archive files.

    Supports the .zip, .tar.gz and .tar.xz formats. Each archive may contain
    multiple plugins if they ship separate plugin.json files.

    Args:
        files: Archive files containing plugins
        method: Installation method (currently only "ui" is supported)
    """
    if method != "ui":
        return JSONResponse(status_code=422, content={"status": "error", "message": "Only method=ui is supported"})

    db = get_db()
    created: List[str] = []
    errors: List[Dict[str, str]] = []

    # Build the set of existing UI plugin ids to avoid collisions
    try:
        existing_ids = {p.get("id") for p in db.get_plugins(_type="ui", with_data=False)}
    except Exception:
        existing_ids = set()

    TMP_UI_ROOT.mkdir(parents=True, exist_ok=True)

    for up in files:
        try:
            filename = up.filename or ""
            lower = filename.lower()
            data = await up.read()
            if not lower.endswith((".zip", ".tar.gz", ".tar.xz")):
                errors.append({"file": filename, "error": "Unsupported archive format"})
                continue

            # Parse the archive and find plugin roots
            plugin_roots: List[str] = []
            is_zip = lower.endswith(".zip")
            if is_zip:
                try:
                    with ZipFile(BytesIO(data)) as zipf:
                        plugin_roots = _find_plugin_roots_in_zip(zipf)
                        if not plugin_roots:
                            errors.append({"file": filename, "error": "plugin.json not found"})
                            continue
                        # Process each plugin root found
                        for root in plugin_roots:
                            # Load plugin.json
                            pj_path = f"{root + '/' if root else ''}plugin.json"
                            try:
                                meta = json_loads(zipf.read(pj_path).decode("utf-8"))
                            except KeyError:
                                errors.append({"file": filename, "error": "Invalid plugin.json location"})
                                continue
                            except JSONDecodeError as e:
                                errors.append({"file": filename, "error": f"Invalid plugin.json: {e}"})
                                continue

                            pid = str(meta.get("id", ""))
                            if not _PLUGIN_ID_RX.match(pid):
                                errors.append({"file": filename, "error": f"Invalid plugin id '{pid}'"})
                                continue
                            if pid in existing_ids:
                                errors.append({"file": filename, "error": f"Plugin {pid} already exists"})
                                continue

                            # Extract to /var/tmp/bunkerweb/ui/<id>
                            dest = TMP_UI_ROOT / pid
                            if dest.exists():
                                # Clean previous tmp content
                                for p in sorted(dest.rglob("*"), reverse=True):
                                    with suppress(Exception):
                                        p.unlink() if p.is_file() else p.rmdir()
                                with suppress(Exception):
                                    dest.rmdir()
                            dest.mkdir(parents=True, exist_ok=True)
                            _extract_plugin_from_zip(zipf, root, dest)

                            # Package the full plugin dir to tar.gz bytes
                            with BytesIO() as buf:
                                with tar_open(fileobj=buf, mode="w:gz", compresslevel=9) as tf:
                                    tf.add(dest, arcname=dest.name, recursive=True)
                                buf.seek(0)
                                blob = buf.getvalue()
                            checksum = bytes_hash(BytesIO(blob), algorithm="sha256")

                            # Compute flags and call the DB update
                            page = dest.joinpath("ui").is_dir()
                            plugin_item = meta | {"type": "ui", "page": page, "method": "ui", "data": blob, "checksum": checksum}
                            err = db.update_external_plugins([plugin_item], _type="ui", delete_missing=False)
                            if err:
                                errors.append({"file": filename, "error": err})
                            else:
                                created.append(pid)
                                existing_ids.add(pid)
                except BadZipFile:
                    errors.append({"file": filename, "error": "Invalid zip archive"})
                # The zip archive is fully handled above; do not fall through to the tar path
                continue

            # Tar formats
            try:
                with tar_open(fileobj=BytesIO(data)) as tarf:
                    roots = _find_plugin_roots_in_tar(tarf)
                    if not roots:
                        errors.append({"file": filename, "error": "plugin.json not found"})
                        continue
                    for root in roots:
                        try:
                            pj_member = next(m for m in tarf.getmembers() if m.name == (root + "/plugin.json" if root else "plugin.json"))
                            meta = json_loads(tarf.extractfile(pj_member).read().decode("utf-8"))  # type: ignore[arg-type]
                        except StopIteration:
                            errors.append({"file": filename, "error": "Invalid plugin.json location"})
                            continue
                        except JSONDecodeError as e:
                            errors.append({"file": filename, "error": f"Invalid plugin.json: {e}"})
                            continue

                        pid = str(meta.get("id", ""))
                        if not _PLUGIN_ID_RX.match(pid):
                            errors.append({"file": filename, "error": f"Invalid plugin id '{pid}'"})
                            continue
                        if pid in existing_ids:
                            errors.append({"file": filename, "error": f"Plugin {pid} already exists"})
                            continue

                        dest = TMP_UI_ROOT / pid
                        if dest.exists():
                            for p in sorted(dest.rglob("*"), reverse=True):
                                with suppress(Exception):
                                    p.unlink() if p.is_file() else p.rmdir()
                            with suppress(Exception):
                                dest.rmdir()
                        dest.mkdir(parents=True, exist_ok=True)
                        _extract_plugin_from_tar(tarf, root, dest)

                        with BytesIO() as buf:
                            with tar_open(fileobj=buf, mode="w:gz", compresslevel=9) as tf:
                                tf.add(dest, arcname=dest.name, recursive=True)
                            buf.seek(0)
                            blob = buf.getvalue()
                        checksum = bytes_hash(BytesIO(blob), algorithm="sha256")

                        page = dest.joinpath("ui").is_dir()
                        plugin_item = meta | {"type": "ui", "page": page, "method": "ui", "data": blob, "checksum": checksum}
                        err = db.update_external_plugins([plugin_item], _type="ui", delete_missing=False)
                        if err:
                            errors.append({"file": filename, "error": err})
                        else:
                            created.append(pid)
                            existing_ids.add(pid)
            except Exception as e:
                errors.append({"file": filename, "error": f"Invalid tar archive: {e}"})
                continue
        except Exception as e:
            errors.append({"file": up.filename or "(unknown)", "error": str(e)})

    status = 207 if errors and created else (400 if errors and not created else 201)
    body: Dict[str, Any] = {"status": "success" if created and not errors else ("partial" if created else "error")}
    if created:
        body["created"] = sorted(created)
    if errors:
        body["errors"] = errors
    return JSONResponse(status_code=status, content=body)
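The packaging step inside `upload_plugins` (re-archive the extracted directory to gzip-compressed tar bytes, then checksum the blob) can be sketched standalone. Here `hashlib.sha256` stands in for the repository's `bytes_hash` helper, and the plugin layout is hypothetical:

```python
from hashlib import sha256
from io import BytesIO
from pathlib import Path
from tarfile import open as tar_open
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    dest = Path(tmp) / "myplugin"            # hypothetical extracted plugin dir
    (dest / "ui").mkdir(parents=True)
    (dest / "plugin.json").write_text('{"id": "myplugin"}')

    with BytesIO() as buf:
        with tar_open(fileobj=buf, mode="w:gz", compresslevel=9) as tf:
            tf.add(dest, arcname=dest.name, recursive=True)
        buf.seek(0)
        blob = buf.getvalue()

    checksum = sha256(blob).hexdigest()      # stand-in for the repo's bytes_hash helper
    page = dest.joinpath("ui").is_dir()      # plugin ships a UI page

print(page, checksum[:8])
```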

196  src/api/app/routers/services.py  Normal file
@@ -0,0 +1,196 @@
from contextlib import suppress
from typing import Any, Dict, List, Optional

from fastapi import APIRouter, Depends, Query
from fastapi.responses import JSONResponse

from ..auth.guard import guard
from ..schemas import ServiceCreateRequest, ServiceUpdateRequest
from ..utils import get_db


router = APIRouter(prefix="/services", tags=["services"])


def _iso(dt) -> Optional[str]:
    with suppress(Exception):
        return dt.astimezone().isoformat()
    return None


@router.get("", dependencies=[Depends(guard)])
def list_services(with_drafts: bool = True) -> JSONResponse:
    """List all services with their configurations.

    Args:
        with_drafts: Include draft services in the results (default: True)
    """
    services = get_db().get_services(with_drafts=with_drafts)
    for it in services:
        it["creation_date"] = _iso(it.get("creation_date"))
        it["last_update"] = _iso(it.get("last_update"))
    return JSONResponse(status_code=200, content={"status": "success", "services": services})


@router.get("/{service}", dependencies=[Depends(guard)])
def get_service(service: str, full: bool = False, methods: bool = True, with_drafts: bool = True) -> JSONResponse:
    """Get the configuration for a specific service.

    Args:
        service: Service identifier
        full: Return the complete configuration including defaults
        methods: Include method metadata for each setting
        with_drafts: Include draft services when computing templates
    """
    db = get_db()
    # Check existence
    if not any(s.get("id") == service for s in db.get_services(with_drafts=True)):
        return JSONResponse(status_code=404, content={"status": "error", "message": f"Service {service} not found"})

    if full:
        conf = db.get_config(methods=methods, with_drafts=with_drafts, service=service)
    else:
        conf = db.get_non_default_settings(methods=methods, with_drafts=with_drafts, service=service)
    return JSONResponse(status_code=200, content={"status": "success", "service": service, "config": conf})


def _full_config_snapshot() -> Dict[str, Any]:
    """Return a full config snapshot (global + services) as a flat dict of values only."""
    return get_db().get_non_default_settings(methods=False, with_drafts=True)


def _persist_config(config: Dict[str, Any]) -> JSONResponse:
    ret = get_db().save_config(config, "api", changed=True)

    if isinstance(ret, str):
        code = 400 if ("read-only" in ret or "already exists" in ret or "doesn't exist" in ret) else 500
        return JSONResponse(status_code=code, content={"status": "error", "message": ret})
    return JSONResponse(status_code=200, content={"status": "success", "changed_plugins": sorted(ret)})


@router.post("", dependencies=[Depends(guard)])
def create_service(req: ServiceCreateRequest) -> JSONResponse:
    """Create a new service with the specified configuration.

    Args:
        req: Service creation request with server_name, variables, and draft status
    """
    conf = _full_config_snapshot()
    name = req.server_name.split(" ")[0].strip()
    if not name:
        return JSONResponse(status_code=422, content={"status": "error", "message": "server_name is required"})

    # Reject duplicates
    existing = set((conf.get("SERVER_NAME", "") or "").split())
    if name in existing:
        return JSONResponse(status_code=400, content={"status": "error", "message": f"Service {name} already exists"})

    # Draft flag
    conf[f"{name}_IS_DRAFT"] = "yes" if req.is_draft else "no"

    # Set the provided variables (unprefixed)
    for k, v in (req.variables or {}).items():
        if isinstance(v, (dict, list)):
            return JSONResponse(status_code=422, content={"status": "error", "message": f"Invalid value for {k}: must be scalar"})
        conf[f"{name}_{k}"] = "" if v is None else v

    if "SERVER_NAME" not in (req.variables or {}):
        conf[f"{name}_SERVER_NAME"] = name

    conf["SERVER_NAME"] = " ".join(sorted(existing | {name}))

    return _persist_config(conf)


@router.patch("/{service}", dependencies=[Depends(guard)])
def update_service(service: str, req: ServiceUpdateRequest) -> JSONResponse:
    """Update an existing service's configuration.

    Args:
        service: Current service identifier
        req: Update request with the new server_name, variables, and draft status
    """
    conf = _full_config_snapshot()
    services_list = (conf.get("SERVER_NAME", "") or "").split()
    if service not in services_list:
        return JSONResponse(status_code=404, content={"status": "error", "message": f"Service {service} not found"})

    target = service
    # Handle rename
    if req.server_name:
        new_name = req.server_name.split(" ")[0].strip()
        if not new_name:
            return JSONResponse(status_code=422, content={"status": "error", "message": "server_name cannot be empty"})
        if new_name != service and new_name in services_list:
            return JSONResponse(status_code=400, content={"status": "error", "message": f"Service {new_name} already exists"})

        # Replace in SERVER_NAME and rename prefixed keys
        services_list = [new_name if s == service else s for s in services_list]
        conf["SERVER_NAME"] = " ".join(services_list)
        renames: List[tuple[str, str]] = []
        for key in list(conf.keys()):
            if key.startswith(f"{service}_"):
                suffix = key[len(service) + 1 :]  # noqa: E203
                renames.append((key, f"{new_name}_{suffix}"))
        for old, new in renames:
            conf[new] = conf.pop(old)
        target = new_name

    # Draft flag update
    if req.is_draft is not None:
        conf[f"{target}_IS_DRAFT"] = "yes" if bool(req.is_draft) else "no"

    # Update the provided variables (unprefixed)
    for k, v in (req.variables or {}).items():
        if k == "SERVER_NAME":
            # Ignore direct edits to SERVER_NAME via variables
            continue
        if isinstance(v, (dict, list)):
            return JSONResponse(status_code=422, content={"status": "error", "message": f"Invalid value for {k}: must be scalar"})
        conf[f"{target}_{k}"] = "" if v is None else v

    return _persist_config(conf)
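The rename branch boils down to swapping the token inside the space-separated `SERVER_NAME` list and re-prefixing every `<old>_<KEY>` entry. A minimal sketch of that flat-config convention on a plain dict (the hostnames are examples):

```python
# Flat config: SERVER_NAME lists service ids; per-service keys are "<id>_<KEY>"
conf = {
    "SERVER_NAME": "app.example.com api.example.com",
    "app.example.com_USE_GZIP": "yes",
    "app.example.com_IS_DRAFT": "no",
    "api.example.com_IS_DRAFT": "yes",
}
service, new_name = "app.example.com", "www.example.com"

# Swap the token in the service list
services_list = conf["SERVER_NAME"].split()
services_list = [new_name if s == service else s for s in services_list]
conf["SERVER_NAME"] = " ".join(services_list)

# Re-prefix every key that belonged to the old name
for key in list(conf.keys()):
    if key.startswith(f"{service}_"):
        conf[f"{new_name}_{key[len(service) + 1:]}"] = conf.pop(key)

print(conf["SERVER_NAME"])  # www.example.com api.example.com
```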


@router.delete("/{service}", dependencies=[Depends(guard)])
def delete_service(service: str) -> JSONResponse:
    """Delete a service and all of its configuration.

    Args:
        service: Service identifier to delete
    """
    conf = _full_config_snapshot()
    services_list = (conf.get("SERVER_NAME", "") or "").split()
    if service not in services_list:
        return JSONResponse(status_code=404, content={"status": "error", "message": f"Service {service} not found"})

    # Remove from the server list
    conf["SERVER_NAME"] = " ".join(s for s in services_list if s != service)
    # Drop prefixed keys
    for key in list(conf.keys()):
        if key.startswith(f"{service}_"):
            conf.pop(key)

    return _persist_config(conf)


@router.post("/{service}/convert", dependencies=[Depends(guard)])
def convert_service(service: str, convert_to: str = Query(..., pattern="^(online|draft)$")) -> JSONResponse:
    """Convert a service between online and draft status.

    Args:
        service: Service identifier
        convert_to: Target status ("online" or "draft")
    """
    conf = _full_config_snapshot()
    services_list = (conf.get("SERVER_NAME", "") or "").split()
    if service not in services_list:
        return JSONResponse(status_code=400, content={"status": "error", "message": "No valid services to convert"})
    conf[f"{service}_IS_DRAFT"] = "no" if convert_to == "online" else "yes"
    return _persist_config(conf)

233  src/api/app/schemas.py  Normal file
@@ -0,0 +1,233 @@
from re import compile as re_compile
from typing import Dict, List, Optional, Union

from pydantic import BaseModel, Field, RootModel, field_validator

# Shared helpers for Configs
NAME_RX = re_compile(r"^[\w_-]{1,64}$")


def normalize_config_type(t: str) -> str:
    return t.strip().replace("-", "_").lower()


def validate_config_name(name: str) -> Optional[str]:
    if not name or not NAME_RX.match(name):
        return "Invalid name: must match ^[\\w_-]{1,64}$"
    return None
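These two helpers drive the `type` and `name` validators used throughout the request models below; re-stated standalone, they behave like this:

```python
from re import compile as re_compile
from typing import Optional

NAME_RX = re_compile(r"^[\w_-]{1,64}$")


def normalize_config_type(t: str) -> str:
    # Trim, unify dashes to underscores, lowercase
    return t.strip().replace("-", "_").lower()


def validate_config_name(name: str) -> Optional[str]:
    # Returns an error message, or None when the name is valid
    if not name or not NAME_RX.match(name):
        return "Invalid name: must match ^[\\w_-]{1,64}$"
    return None


print(normalize_config_type(" Server-HTTP "))  # server_http
print(validate_config_name("my-config_01"))    # None (valid)
print(validate_config_name("bad/name"))        # error message
```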


# Accepted config types (normalized form)
CONFIG_TYPES = {
    # HTTP-level
    "http",
    "server_http",
    "default_server_http",
    # ModSecurity
    "modsec_crs",
    "modsec",
    # Stream
    "stream",
    "server_stream",
    # CRS plugins
    "crs_plugins_before",
    "crs_plugins_after",
}


class BanRequest(BaseModel):
    ip: str
    exp: int = Field(86400, description="Expiration in seconds (0 means permanent)")
    reason: str = Field("api", description="Reason for the ban")
    service: Optional[str] = Field(None, description="Service name if the ban is service-specific")


class UnbanRequest(BaseModel):
    ip: str
    service: Optional[str] = Field(None, description="Service name if the unban is service-specific")


# Instances
class InstanceCreateRequest(BaseModel):
    hostname: str
    name: Optional[str] = Field(None, description="Friendly name for the instance")
    port: Optional[int] = Field(None, description="API HTTP port; defaults from settings if omitted")
    server_name: Optional[str] = Field(None, description="API server_name/Host header; defaults if omitted")
    method: Optional[str] = Field("ui", description='Source method tag (defaults to "ui")')


class InstancesDeleteRequest(BaseModel):
    instances: List[str]


class InstanceUpdateRequest(BaseModel):
    name: Optional[str] = Field(None, description="Friendly name for the instance")
    port: Optional[int] = Field(None, description="API HTTP port")
    server_name: Optional[str] = Field(None, description="API server_name/Host header")
    method: Optional[str] = Field(None, description="Source method tag")


# Services
class ServiceCreateRequest(BaseModel):
    server_name: str = Field(..., description="Service server_name (first token used as ID)")
    is_draft: bool = Field(False, description="Create as a draft service")
    variables: Optional[Dict[str, str]] = Field(None, description="Unprefixed settings for the service")


class ServiceUpdateRequest(BaseModel):
    server_name: Optional[str] = Field(None, description="Rename the service (first token used as ID)")
    is_draft: Optional[bool] = Field(None, description="Set the draft flag")
    variables: Optional[Dict[str, str]] = Field(None, description="Unprefixed settings to upsert for the service")


# Configs
class ConfigCreateRequest(BaseModel):
    service: Optional[str] = Field(None, description='Service id; use "global" or leave empty for global')
    type: str = Field(..., description="Config type, e.g., http, server_http, modsec, ...")
    name: str = Field(..., description=r"Config name (^[\w_-]{1,64}$)")
    data: str = Field(..., description="Config content as a UTF-8 string")

    @field_validator("service")
    @classmethod
    def _normalize_service(cls, v: Optional[str]) -> Optional[str]:
        return None if v in (None, "", "global") else v

    @field_validator("type")
    @classmethod
    def _normalize_and_check_type(cls, v: str) -> str:
        t = normalize_config_type(v)
        if t not in CONFIG_TYPES:
            raise ValueError("Invalid type")
        return t

    @field_validator("name")
    @classmethod
    def _validate_name(cls, v: str) -> str:
        v = v.strip()
        err = validate_config_name(v)
        if err:
            raise ValueError(err)
        return v


class ConfigUpdateRequest(BaseModel):
    service: Optional[str] = Field(None, description='New service id; use "global" or leave empty for global')
    type: Optional[str] = Field(None, description="New config type")
    name: Optional[str] = Field(None, description="New config name")
    data: Optional[str] = Field(None, description="New config content as a UTF-8 string")

    @field_validator("service")
    @classmethod
    def _normalize_service(cls, v: Optional[str]) -> Optional[str]:
        return None if v in (None, "", "global") else v

    @field_validator("type")
    @classmethod
    def _normalize_and_check_type(cls, v: Optional[str]) -> Optional[str]:
        if v is None:
            return v
        t = normalize_config_type(v)
        if t not in CONFIG_TYPES:
            raise ValueError("Invalid type")
        return t

    @field_validator("name")
    @classmethod
    def _validate_name(cls, v: Optional[str]) -> Optional[str]:
        if v is None:
            return v
        v = v.strip()
        err = validate_config_name(v)
        if err:
            raise ValueError(err)
        return v


class ConfigKey(BaseModel):
    service: Optional[str] = Field(None, description='Service id; use "global" or leave empty for global')
    type: str
    name: str

    @field_validator("service")
    @classmethod
    def _normalize_service(cls, v: Optional[str]) -> Optional[str]:
        return None if v in (None, "", "global") else v

    @field_validator("type")
    @classmethod
    def _normalize_and_check_type(cls, v: str) -> str:
        t = normalize_config_type(v)
        if t not in CONFIG_TYPES:
            raise ValueError("Invalid type")
        return t

    @field_validator("name")
    @classmethod
    def _validate_name(cls, v: str) -> str:
        v = v.strip()
        err = validate_config_name(v)
        if err:
            raise ValueError(err)
        return v


class ConfigsDeleteRequest(BaseModel):
    configs: List[ConfigKey] = Field(..., min_length=1)


# Cache
class CacheFileKey(BaseModel):
    service: Optional[str] = Field(None, description='Service id; use "global" or leave empty for global')
    plugin: str
    jobName: str
    fileName: str

    @field_validator("service")
    @classmethod
    def _normalize_service(cls, v: Optional[str]) -> Optional[str]:
        return None if v in (None, "", "global") else v

    @field_validator("plugin", "jobName", "fileName")
    @classmethod
    def _non_empty(cls, v: str) -> str:
        v = v.strip()
        if not v:
            raise ValueError("must be a non-empty string")
        return v


class CacheFilesDeleteRequest(BaseModel):
    cache_files: List[CacheFileKey] = Field(..., min_length=1)


# Jobs
class JobItem(BaseModel):
    plugin: str
    name: Optional[str] = Field(None, description="Job name (optional; not required to trigger)")

    @field_validator("plugin")
    @classmethod
    def _non_empty_plugin(cls, v: str) -> str:
        v = v.strip()
        if not v:
            raise ValueError("plugin must be a non-empty string")
        return v


class RunJobsRequest(BaseModel):
    jobs: List[JobItem] = Field(..., min_length=1)


# Global config
Scalar = Union[str, int, float, bool, None]


class GlobalConfigUpdate(RootModel[Dict[str, Scalar]]):
    @field_validator("root")
    @classmethod
    def _validate_scalars(cls, v: Dict[str, Scalar]) -> Dict[str, Scalar]:
        if not isinstance(v, dict):
            raise ValueError("Body must be a JSON object")
        for k, val in v.items():
            if isinstance(val, (dict, list)):
                raise ValueError(f"Invalid value for {k}: must be scalar")
        return v

61  src/api/app/utils.py  Normal file
@@ -0,0 +1,61 @@
#!/usr/bin/env python3

from os.path import sep
from pathlib import Path
from typing import TYPE_CHECKING, Optional

from bcrypt import checkpw, gensalt, hashpw
from regex import compile as re_compile

from logger import setup_logger  # type: ignore

if TYPE_CHECKING:
    # Import only for type checking; runtime imports stay inside the getters
    # below to avoid import cycles.
    from app.models.api_database import APIDatabase
    from Database import Database  # type: ignore


TMP_DIR = Path(sep, "var", "tmp", "bunkerweb")
LIB_DIR = Path(sep, "var", "lib", "bunkerweb")

LOGGER = setup_logger("API")

# Cached singletons for pooled DB engines
_DB_INSTANCE: Optional["Database"] = None
_API_DB_INSTANCE: Optional["APIDatabase"] = None  # Late-bound to avoid import cycles


def get_db(*, log: bool = True) -> "Database":
    """Return a shared pooled Database instance.

    Creates it on first use; reuses the same engine/session factory afterwards.
    """
    global _DB_INSTANCE
    if _DB_INSTANCE is None or getattr(_DB_INSTANCE, "sql_engine", None) is None:
        from Database import Database  # type: ignore

        _DB_INSTANCE = Database(LOGGER, log=log)
    return _DB_INSTANCE


def get_api_db(*, log: bool = True) -> "APIDatabase":
    """Return a shared pooled APIDatabase instance for API models."""
    global _API_DB_INSTANCE
    if _API_DB_INSTANCE is None or getattr(_API_DB_INSTANCE, "sql_engine", None) is None:
        from .models.api_database import APIDatabase

        _API_DB_INSTANCE = APIDatabase(LOGGER, log=log)
    return _API_DB_INSTANCE


USER_PASSWORD_RX = re_compile(r"^(?=.*\p{Ll})(?=.*\p{Lu})(?=.*\d)(?=.*\P{Alnum}).{8,}$")
PLUGIN_NAME_RX = re_compile(r"^[\w.-]{4,64}$")

BISCUIT_PUBLIC_KEY_FILE = LIB_DIR.joinpath(".api_biscuit_public_key")
BISCUIT_PRIVATE_KEY_FILE = LIB_DIR.joinpath(".api_biscuit_private_key")


def gen_password_hash(password: str) -> bytes:
    return hashpw(password.encode("utf-8"), gensalt(rounds=13))


def check_password(password: str, hashed: bytes) -> bool:
    return checkpw(password.encode("utf-8"), hashed)

178  src/api/app/yaml_base_settings.py  Normal file
@@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
# Based on:
# https://pypi.org/project/pydantic-settings-yaml/

from contextlib import suppress
from os import getenv
from pathlib import Path
from re import compile as re_compile
from typing import Any, Dict, Mapping, Optional, Tuple, Type, Union

from pydantic.fields import FieldInfo
from pydantic_settings import BaseSettings, DotEnvSettingsSource, EnvSettingsSource, InitSettingsSource, SecretsSettingsSource, SettingsConfigDict
from pydantic_settings.sources import DotenvType, ENV_FILE_SENTINEL
from yaml import safe_load


class YamlSettingsConfigDict(SettingsConfigDict):
    # Keep compatibility with the older custom config while aligning with upstream keys
    yaml_file: str
    yaml_file_encoding: Optional[str]
    yaml_config_section: Optional[str]


def replace_secrets(secrets_dir: Path, data: str) -> str:
    """Replace "<file:xxxx>" secret references in the given data."""
    pattern = re_compile(r"\<file\:([^>]*)\>")

    for match in pattern.findall(data):
        path = secrets_dir / Path(match)

        if not path.exists():
            print(
                f"Secret file referenced in yaml file not found: {path}, settings will not be loaded from secret file.",
                flush=True,
            )
        else:
            # Replace the exact token as it appears (do not uppercase)
            data = data.replace(f"<file:{match}>", path.read_text("utf-8"))
    return data
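The `<file:...>` substitution can be exercised standalone; this sketch re-states the helper (minus the warning print) against a temporary secrets directory with a made-up secret name:

```python
from pathlib import Path
from re import compile as re_compile
from tempfile import TemporaryDirectory


def replace_secrets(secrets_dir: Path, data: str) -> str:
    # Same logic as the helper above, without the missing-file warning
    pattern = re_compile(r"\<file\:([^>]*)\>")
    for match in pattern.findall(data):
        path = secrets_dir / Path(match)
        if path.exists():
            data = data.replace(f"<file:{match}>", path.read_text("utf-8"))
    return data


with TemporaryDirectory() as tmp:
    secrets = Path(tmp)
    (secrets / "db_password").write_text("s3cret")  # hypothetical secret file
    yaml_text = "database:\n  password: <file:db_password>\n"
    resolved = replace_secrets(secrets, yaml_text)

print(resolved)
```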


def yaml_config_settings_source(
    settings: "YamlBaseSettings",
    *,
    yaml_file: Optional[Union[str, Path]] = None,
    secrets_dir: Optional[Union[str, Path]] = None,
    yaml_file_encoding: Optional[str] = None,
    yaml_config_section: Optional[str] = None,
) -> Dict[str, Any]:
    """Load settings from the YAML file at `Config.yaml_file`.

    "<file:xxxx>" patterns are replaced with the contents of file xxxx. The root
    path where those files are looked up is configured with `secrets_dir`.
    """
    if yaml_file is None:
        yaml_file = settings.model_config.get("yaml_file")
    if secrets_dir is None:
        secrets_dir = settings.model_config.get("secrets_dir")
    if yaml_file_encoding is None:
        yaml_file_encoding = settings.model_config.get("yaml_file_encoding")
    if yaml_config_section is None:
        yaml_config_section = settings.model_config.get("yaml_config_section")

    assert yaml_file, "Settings.yaml_file not properly configured"
    assert secrets_dir, "Settings.secrets_dir not properly configured"

    path = Path(yaml_file)
    secrets_path = Path(secrets_dir)

    if not path.exists():
        raise FileNotFoundError(f"Could not open yaml settings file at: {path}")

    encoding = yaml_file_encoding or "utf-8"
    loaded = safe_load(replace_secrets(secrets_path, path.read_text(encoding))) or {}
    if yaml_config_section:
        # Allow nested section selection when the file contains multiple configs
        with suppress(Exception):
            section = loaded.get(yaml_config_section)
            if isinstance(section, dict):
                return section
    return loaded


class YamlConfigSettingsSource(DotEnvSettingsSource):
    """A settings source that loads variables from a YAML file.

    Note: slightly adapted version of JsonConfigSettingsSource from the docs.
    """

    def __init__(
        self,
        settings_cls: type[BaseSettings],
        yaml_file: Optional[str] = None,
        env_file: Optional[DotenvType] = ENV_FILE_SENTINEL,
        env_file_encoding: Optional[str] = None,
        case_sensitive: Optional[bool] = None,
        env_prefix: Optional[str] = None,
        env_nested_delimiter: Optional[str] = None,
        secrets_dir: Optional[Union[str, Path]] = None,
    ) -> None:
        self._yaml_data: dict = yaml_config_settings_source(
            settings_cls,
            yaml_file=yaml_file,
            secrets_dir=secrets_dir,
            yaml_file_encoding=settings_cls.model_config.get("yaml_file_encoding"),  # type: ignore[attr-defined]
            yaml_config_section=settings_cls.model_config.get("yaml_config_section"),  # type: ignore[attr-defined]
        )  # type: ignore

        # Also expose uppercase keys so env-style lookups match; iterate over a
        # copy since we add keys while looping
        for k, v in list(self._yaml_data.items()):
            self._yaml_data[k.upper()] = v

        super().__init__(settings_cls, env_file, env_file_encoding, case_sensitive, env_prefix, env_nested_delimiter)  # type: ignore

    def _load_env_vars(self) -> Mapping[str, Optional[str]]:
        return self._yaml_data

    def get_field_value(self, field: FieldInfo, field_name: str) -> Tuple[Any, str, bool]:
        field_value = self._yaml_data.get(field_name) if self._yaml_data else None
        return field_value, field_name, False

    def prepare_field_value(self, field_name: str, field: FieldInfo, value: Any, value_is_complex: bool) -> Any:
        return value

    def __call__(self) -> Dict[str, Any]:
        d: Dict[str, Any] = super().__call__()

        for field_name, field in self.settings_cls.model_fields.items():
            field_value, field_key, value_is_complex = self.get_field_value(field, field_name)
            field_value = self.prepare_field_value(field_name, field, field_value, value_is_complex)
            if field_value is not None:
                d[field_key] = field_value

        return d


class YamlBaseSettings(BaseSettings):
    """Extend BaseSettings to also read from a YAML file.
|
||||
|
||||
Precedence (high → low): init kwargs, env, secrets, YAML, dotenv, defaults.
|
||||
"""
|
||||
|
||||
@classmethod
|
||||
def settings_customise_sources(
|
||||
cls,
|
||||
settings_cls: Type[BaseSettings],
|
||||
init_settings: InitSettingsSource,
|
||||
env_settings: EnvSettingsSource,
|
||||
dotenv_settings: DotEnvSettingsSource,
|
||||
file_secret_settings: SecretsSettingsSource,
|
||||
):
|
||||
"""Insert YAML source between env and dotenv with values from model_config."""
|
||||
yaml_source = YamlConfigSettingsSource(
|
||||
settings_cls,
|
||||
yaml_file=settings_cls.model_config.get("yaml_file"), # type: ignore[attr-defined]
|
||||
env_file=settings_cls.model_config.get("env_file"), # type: ignore[attr-defined]
|
||||
env_file_encoding=settings_cls.model_config.get("env_file_encoding"), # type: ignore[attr-defined]
|
||||
case_sensitive=settings_cls.model_config.get("case_sensitive"), # type: ignore[attr-defined]
|
||||
env_prefix=settings_cls.model_config.get("env_prefix"), # type: ignore[attr-defined]
|
||||
env_nested_delimiter=settings_cls.model_config.get("env_nested_delimiter"), # type: ignore[attr-defined]
|
||||
secrets_dir=settings_cls.model_config.get("secrets_dir"), # type: ignore[attr-defined]
|
||||
)
|
||||
|
||||
# Keep project logic: env → secrets → YAML → dotenv (then defaults)
|
||||
return (init_settings, env_settings, file_secret_settings, yaml_source, dotenv_settings)
|
||||
|
||||
# Baseline defaults; models can override via their own model_config
|
||||
model_config = SettingsConfigDict(
|
||||
secrets_dir=getenv("SETTINGS_SECRETS_DIR", "/etc/secrets"),
|
||||
yaml_file=getenv("SETTINGS_YAML_FILE", "/etc/bunkerweb/config.yml"),
|
||||
) # type: ignore
|
||||
|
||||
|
||||
__ALL__ = (YamlBaseSettings, YamlSettingsConfigDict)
|
||||
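The `replace_secrets` helper referenced above is not part of this diff; a minimal sketch of the `"<file:xxxx>"` substitution described in the docstring, assuming each secret is a plain file under `secrets_dir`, could look like:

```python
import re
from pathlib import Path


def replace_secrets(secrets_dir: Path, content: str) -> str:
    """Replace every "<file:xxxx>" token with the contents of secrets_dir/xxxx."""

    def _sub(match: re.Match) -> str:
        secret_file = secrets_dir / match.group(1)
        # Leave missing secrets untouched so YAML parsing can still proceed
        if not secret_file.is_file():
            return match.group(0)
        return secret_file.read_text(encoding="utf-8").strip()

    return re.sub(r"<file:([^>]+)>", _sub, content)
```

The actual helper may differ (e.g. raise on missing files); this only illustrates the token-to-file expansion.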
10
src/api/conf/redis.conf
Normal file
@@ -0,0 +1,10 @@
bind 127.0.0.1
daemonize no
supervised no
logfile ""
databases 1
save 900 1
save 300 10
save 60 10000
dir /var/lib/redis/
appendonly yes
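The three `save` directives mean: snapshot when at least 1 change occurred in 900 s, 10 changes in 300 s, or 10000 changes in 60 s. A small illustrative parser for that rule format (a hypothetical helper, not part of the repo or of Redis itself):

```python
from typing import List, Tuple


def parse_save_directives(conf: str) -> List[Tuple[int, int]]:
    """Extract (seconds, min_changes) pairs from redis.conf `save` lines."""
    rules = []
    for line in conf.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "save":
            rules.append((int(parts[1]), int(parts[2])))
    return rules


def snapshot_due(rules: List[Tuple[int, int]], elapsed_seconds: int, changes: int) -> bool:
    """A snapshot is due when any rule's time window and change count are both met."""
    return any(elapsed_seconds >= secs and changes >= n for secs, n in rules)
```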
93
src/api/entrypoint.sh
Normal file
@@ -0,0 +1,93 @@
#!/bin/bash

# Enforce a restrictive default umask for all operations
umask 027

# Load utility functions from a shared helper script.
# shellcheck disable=SC1091
. /usr/share/bunkerweb/helpers/utils.sh

# Handle SIGTERM, SIGINT and SIGQUIT by stopping the API process cleanly.
function trap_exit() {
    # Log that the script caught a termination signal.
    # shellcheck disable=SC2317
    log "ENTRYPOINT" "ℹ️ " "Caught stop operation"

    # Stop the API process if a PID file exists.
    # shellcheck disable=SC2317
    if [ -f "/var/run/bunkerweb/api.pid" ]; then
        # shellcheck disable=SC2317
        log "ENTRYPOINT" "ℹ️ " "Stopping API ..."

        # Verify the API process is running before stopping it.
        # shellcheck disable=SC2317
        if kill -0 "$(cat /var/run/bunkerweb/api.pid)" 2> /dev/null; then
            # Send a TERM signal to stop the API.
            # shellcheck disable=SC2317
            kill -s TERM "$(cat /var/run/bunkerweb/api.pid)"
        fi

        # Log that the API process has been stopped.
        # shellcheck disable=SC2317
        log "ENTRYPOINT" "ℹ️ " "API stopped"
    fi
}

# Register the trap_exit function to handle SIGTERM, SIGINT, and SIGQUIT signals.
trap "trap_exit" TERM INT QUIT

# Remove any existing PID file for the API to avoid stale state issues.
if [ -f /var/run/bunkerweb/api.pid ]; then
    rm -f /var/run/bunkerweb/api.pid
fi

# Log the startup of the API, including the version being launched.
log "ENTRYPOINT" "ℹ️" "Starting the API v$(cat /usr/share/bunkerweb/VERSION) ..."

# Set up and validate the /data folder, ensuring required configurations are present.
/usr/share/bunkerweb/helpers/data.sh "ENTRYPOINT"

handle_docker_secrets

# Determine the deployment mode (Swarm, Kubernetes, Autoconf, or Docker) and record it.
if [[ $(echo "$SWARM_MODE" | awk '{print tolower($0)}') == "yes" ]]; then
    echo "Swarm" > /usr/share/bunkerweb/INTEGRATION
elif [[ $(echo "$KUBERNETES_MODE" | awk '{print tolower($0)}') == "yes" ]]; then
    echo "Kubernetes" > /usr/share/bunkerweb/INTEGRATION
elif [[ $(echo "$AUTOCONF_MODE" | awk '{print tolower($0)}') == "yes" ]]; then
    echo "Autoconf" > /usr/share/bunkerweb/INTEGRATION
else
    echo "Docker" > /usr/share/bunkerweb/INTEGRATION
fi

# Normalize TLS cipher envs for consistency (support unprefixed and legacy names).
# The first non-empty candidate wins; an explicit API_SSL_CIPHERS_CUSTOM is never overwritten.
if [[ -n "${SSL_CIPHERS_CUSTOM}" && -z "${API_SSL_CIPHERS_CUSTOM}" ]]; then
    export API_SSL_CIPHERS_CUSTOM="${SSL_CIPHERS_CUSTOM}"
fi
if [[ -n "${API_SSL_CIPHERS}" && -z "${API_SSL_CIPHERS_CUSTOM}" ]]; then
    export API_SSL_CIPHERS_CUSTOM="${API_SSL_CIPHERS}"
fi
if [[ -n "${SSL_CIPHERS}" && -z "${API_SSL_CIPHERS_CUSTOM}" ]]; then
    export API_SSL_CIPHERS_CUSTOM="${SSL_CIPHERS}"
fi
if [[ -z "${API_SSL_CIPHERS_LEVEL}" && -n "${SSL_CIPHERS_LEVEL}" ]]; then
    export API_SSL_CIPHERS_LEVEL="${SSL_CIPHERS_LEVEL}"
fi

# Make sure the API configuration file exists before Gunicorn starts.
if [ ! -f /etc/bunkerweb/api.yml ]; then
    touch /etc/bunkerweb/api.yml
fi

# Start the main Gunicorn process with the standard logger configuration.
python3 -m gunicorn --logger-class utils.logger.APILogger --config utils/gunicorn.conf.py &
pid="$!"

# Wait for the main API process to exit and capture its exit code.
wait "$pid"
exit_code=$?

# Log the exit status of the main API process for debugging purposes.
log "ENTRYPOINT" "ℹ️" "API stopped with exit code $exit_code"

# Exit the script with the same exit code as the main API process.
exit $exit_code
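The cipher-variable normalization in the entrypoint resolves the first non-empty candidate while never overwriting an already-set `API_SSL_CIPHERS_CUSTOM`. The same precedence chain can be sketched in Python (illustrative only, mirroring the shell logic):

```python
from typing import Mapping, Optional


def resolve_cipher_custom(env: Mapping[str, str]) -> Optional[str]:
    """An explicit API_SSL_CIPHERS_CUSTOM always wins; otherwise the first
    non-empty fallback (SSL_CIPHERS_CUSTOM, API_SSL_CIPHERS, SSL_CIPHERS) is used."""
    explicit = env.get("API_SSL_CIPHERS_CUSTOM", "")
    if explicit:
        return explicit
    for name in ("SSL_CIPHERS_CUSTOM", "API_SSL_CIPHERS", "SSL_CIPHERS"):
        value = env.get(name, "")
        if value:
            return value
    return None
```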
12
src/api/requirements.in
Normal file
@@ -0,0 +1,12 @@
bcrypt==4.3.0
biscuit-python==0.3.2
cryptography==45.0.7
fastapi==0.116.1
gunicorn==23.0.0
pydantic==2.11.9
pydantic_settings==2.10.1
python-multipart==0.0.20
PyYAML==6.0.2
regex==2025.9.1
slowapi==0.1.9
uvicorn[standard]==0.35.0
1043
src/api/requirements.txt
Normal file
File diff suppressed because it is too large
0
src/api/utils/__init__.py
Normal file
518
src/api/utils/gunicorn.conf.py
Normal file
@@ -0,0 +1,518 @@
from datetime import datetime
from hashlib import sha256
from json import JSONDecodeError, dump, loads
from os import cpu_count, environ, getenv, sep
from os.path import join
from pathlib import Path
from secrets import token_hex
from stat import S_IRUSR, S_IWUSR
from sys import exit, path as sys_path
from time import sleep
from traceback import format_exc

for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
    if deps_path not in sys_path:
        sys_path.append(deps_path)

from biscuit_auth import KeyPair, PublicKey, PrivateKey

from common_utils import handle_docker_secrets  # type: ignore
from logger import setup_logger  # type: ignore

from app.models.api_database import APIDatabase
from app.utils import BISCUIT_PRIVATE_KEY_FILE, BISCUIT_PUBLIC_KEY_FILE, USER_PASSWORD_RX, check_password, gen_password_hash

TMP_DIR = Path(sep, "var", "tmp", "bunkerweb")
TMP_UI_DIR = TMP_DIR.joinpath("api")
RUN_DIR = Path(sep, "var", "run", "bunkerweb")
LIB_DIR = Path(sep, "var", "lib", "bunkerweb")

# ACL cache (file-backed like a tiny redis)
API_ACL_FILE = LIB_DIR.joinpath("api_acl.json")

HEALTH_FILE = TMP_DIR.joinpath("api.healthy")
ERROR_FILE = TMP_DIR.joinpath("api.error")

PID_FILE = RUN_DIR.joinpath("api.pid")

BISCUIT_PUBLIC_KEY_HASH_FILE = BISCUIT_PUBLIC_KEY_FILE.with_suffix(".hash")  # File to store hash of Biscuit public key
BISCUIT_PRIVATE_KEY_HASH_FILE = BISCUIT_PRIVATE_KEY_FILE.with_suffix(".hash")  # File to store hash of Biscuit private key

MAX_WORKERS = int(getenv("MAX_WORKERS", max((cpu_count() or 1) - 1, 1)))
LOG_LEVEL = getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "info"))
LISTEN_ADDR = getenv("API_LISTEN_ADDR", getenv("LISTEN_ADDR", "0.0.0.0"))
LISTEN_PORT = getenv("API_LISTEN_PORT", getenv("LISTEN_PORT", "8888"))
"""
Trusted proxies / forwarded headers

Default to trusting only the local machine (127.0.0.1). This assumes there is
no load balancer or WAF in front of the API. Operators can override via
API_FORWARDED_ALLOW_IPS or FORWARDED_ALLOW_IPS.
"""
FORWARDED_ALLOW_IPS = getenv("API_FORWARDED_ALLOW_IPS", getenv("FORWARDED_ALLOW_IPS", "127.0.0.1"))

"""
TLS/SSL support

Enable TLS by setting API_SSL_ENABLED to yes/true/1 and providing
API_SSL_CERTFILE and API_SSL_KEYFILE. Optional: API_SSL_CA_CERTS.
"""
API_SSL_ENABLED = getenv("API_SSL_ENABLED", getenv("SSL_ENABLED", "no")).lower() in ("1", "true", "yes", "on")
API_SSL_CERTFILE = getenv("API_SSL_CERTFILE", getenv("SSL_CERTFILE", ""))
API_SSL_KEYFILE = getenv("API_SSL_KEYFILE", getenv("SSL_KEYFILE", ""))
API_SSL_CA_CERTS = getenv("API_SSL_CA_CERTS", getenv("SSL_CA_CERTS", ""))

CAPTURE_OUTPUT = getenv("CAPTURE_OUTPUT", "no").lower() == "yes"

wsgi_app = "app.main:app"
proc_name = "bunkerweb-api"
accesslog = join(sep, "var", "log", "bunkerweb", "api-access.log") if CAPTURE_OUTPUT else "-"
access_log_format = '%({x-forwarded-for}i)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
errorlog = join(sep, "var", "log", "bunkerweb", "api.log") if CAPTURE_OUTPUT else "-"
capture_output = CAPTURE_OUTPUT
limit_request_line = 0
limit_request_fields = 32768
limit_request_field_size = 0
reuse_port = True
daemon = False
chdir = join(sep, "usr", "share", "bunkerweb", "api")
umask = 0o027  # octal: the original 0x027 is decimal 39 (0o047), which would loosen group permissions
pidfile = PID_FILE.as_posix()
worker_tmp_dir = join(sep, "dev", "shm")
tmp_upload_dir = TMP_UI_DIR.as_posix()
secure_scheme_headers = {}
forwarded_allow_ips = FORWARDED_ALLOW_IPS
pythonpath = join(sep, "usr", "share", "bunkerweb", "deps", "python") + "," + join(sep, "usr", "share", "bunkerweb", "api")
proxy_allow_ips = FORWARDED_ALLOW_IPS
casefold_http_method = True
workers = MAX_WORKERS
bind = f"{LISTEN_ADDR}:{LISTEN_PORT}"
worker_class = "utils.worker.ApiUvicornWorker"
threads = int(getenv("MAX_THREADS", MAX_WORKERS * 2))
max_requests_jitter = min(8, MAX_WORKERS)
graceful_timeout = 30
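The worker sizing above defaults to `cpu_count - 1` Gunicorn workers (never fewer than one) and twice as many threads, both overridable via `MAX_WORKERS`/`MAX_THREADS`. A standalone sketch of the same computation:

```python
from os import cpu_count
from typing import Optional


def compute_workers(env_max_workers: Optional[str] = None) -> int:
    """Default to one worker per CPU minus one, never below one."""
    fallback = max((cpu_count() or 1) - 1, 1)
    return int(env_max_workers) if env_max_workers else fallback


def compute_threads(workers: int, env_max_threads: Optional[str] = None) -> int:
    """Twice the worker count unless MAX_THREADS overrides it."""
    return int(env_max_threads) if env_max_threads else workers * 2
```

Leaving one core free keeps headroom for the NGINX/WAF processes that typically run alongside the API.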

DEBUG = getenv("DEBUG", "no").lower() in ("1", "true", "yes", "on")  # parse the flag; getenv always returns a string

loglevel = "debug" if DEBUG else LOG_LEVEL.lower()

if DEBUG:
    reload = True
    reload_extra_files = [
        file.as_posix()
        for file in Path(sep, "usr", "share", "bunkerweb", "api", "app").rglob("*")
        if "__pycache__" not in file.parts and "static" not in file.parts
    ]

# Configure TLS when enabled and certificate files are provided
if API_SSL_ENABLED and API_SSL_CERTFILE and API_SSL_KEYFILE:
    certfile = API_SSL_CERTFILE
    keyfile = API_SSL_KEYFILE
    if API_SSL_CA_CERTS:
        ca_certs = API_SSL_CA_CERTS


def on_starting(server):
    TMP_DIR.mkdir(parents=True, exist_ok=True)
    TMP_UI_DIR.mkdir(parents=True, exist_ok=True)
    RUN_DIR.mkdir(parents=True, exist_ok=True)
    LIB_DIR.mkdir(parents=True, exist_ok=True)

    ERROR_FILE.unlink(missing_ok=True)

    # Handle Docker secrets first
    docker_secrets = handle_docker_secrets()
    if docker_secrets:
        environ.update(docker_secrets)

    LOGGER = setup_logger("API", getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "INFO")))

    if docker_secrets:
        LOGGER.info(f"Loaded {len(docker_secrets)} Docker secrets")

    def set_secure_permissions(file_path: Path):
        """Set file permissions to 600 (owner read/write only)."""
        file_path.chmod(S_IRUSR | S_IWUSR)

    # * Handle Biscuit keys
    try:
        biscuit_public_key_hex = None
        biscuit_private_key_hex = None
        keys_loaded = False
        keys_generated = False

        # * Step 1: Load Biscuit keys from files and validate them
        if BISCUIT_PUBLIC_KEY_FILE.is_file() and BISCUIT_PRIVATE_KEY_FILE.is_file():
            try:
                pub_hex = BISCUIT_PUBLIC_KEY_FILE.read_text(encoding="utf-8").strip()
                priv_hex = BISCUIT_PRIVATE_KEY_FILE.read_text(encoding="utf-8").strip()
                if not pub_hex or not priv_hex:
                    raise ValueError("One or both Biscuit key files are empty.")

                # Validate by attempting to load
                PublicKey.from_hex(pub_hex)
                PrivateKey.from_hex(priv_hex)
                biscuit_public_key_hex = pub_hex
                biscuit_private_key_hex = priv_hex
                keys_loaded = True
                LOGGER.info("Valid Biscuit keys successfully loaded from files.")
            except Exception as e:
                LOGGER.error(f"Failed to load or validate Biscuit keys from files: {e}. Falling back.")
                biscuit_public_key_hex = None  # Ensure reset if loading failed
                biscuit_private_key_hex = None

        # * Step 2: Check environment variables if no valid files were loaded
        if not keys_loaded:
            pub_hex_env = getenv("BISCUIT_PUBLIC_KEY", "").strip()
            priv_hex_env = getenv("BISCUIT_PRIVATE_KEY", "").strip()
            if pub_hex_env and priv_hex_env:
                try:
                    # Validate by attempting to load
                    PublicKey.from_hex(pub_hex_env)
                    PrivateKey.from_hex(priv_hex_env)
                    biscuit_public_key_hex = pub_hex_env
                    biscuit_private_key_hex = priv_hex_env
                    keys_loaded = True
                    LOGGER.info("Valid Biscuit keys successfully loaded from environment variables.")
                except Exception as e:
                    LOGGER.error(f"Failed to validate Biscuit keys from environment variables: {e}. Falling back.")
                    biscuit_public_key_hex = None  # Ensure reset if env validation failed
                    biscuit_private_key_hex = None

        # * Step 3: Generate new keys if none were loaded and validated successfully
        if not keys_loaded:
            LOGGER.warning("No valid Biscuit keys found from files or environment. Generating new random keys...")
            # Generate new keys using the biscuit library
            keypair = KeyPair()
            biscuit_private_key_obj = keypair.private_key
            biscuit_public_key_obj = keypair.public_key
            biscuit_private_key_hex = biscuit_private_key_obj.to_hex()
            biscuit_public_key_hex = biscuit_public_key_obj.to_hex()
            keys_generated = True
            LOGGER.info("Generated new Biscuit key pair.")

        # Ensure we have keys before proceeding
        if not biscuit_public_key_hex or not biscuit_private_key_hex:
            raise RuntimeError("Failed to load or generate required Biscuit keys.")

        # * Step 4: Hash keys for change detection
        current_public_key_hash = sha256(biscuit_public_key_hex.encode("utf-8")).hexdigest()
        current_private_key_hash = sha256(biscuit_private_key_hex.encode("utf-8")).hexdigest()
        previous_public_key_hash = BISCUIT_PUBLIC_KEY_HASH_FILE.read_text(encoding="utf-8").strip() if BISCUIT_PUBLIC_KEY_HASH_FILE.is_file() else None
        previous_private_key_hash = BISCUIT_PRIVATE_KEY_HASH_FILE.read_text(encoding="utf-8").strip() if BISCUIT_PRIVATE_KEY_HASH_FILE.is_file() else None

        # * Step 5: Compare hashes and update files if necessary
        public_key_changed = previous_public_key_hash is None or current_public_key_hash != previous_public_key_hash
        private_key_changed = previous_private_key_hash is None or current_private_key_hash != previous_private_key_hash

        if public_key_changed or private_key_changed or keys_generated:
            if keys_generated:
                LOGGER.warning("Saving newly generated Biscuit keys.")
            else:
                LOGGER.warning("The Biscuit keys have changed or are being set for the first time.")

            # Update the public key file and its hash
            with BISCUIT_PUBLIC_KEY_FILE.open("w", encoding="utf-8") as file:
                file.write(biscuit_public_key_hex)
            set_secure_permissions(BISCUIT_PUBLIC_KEY_FILE)

            with BISCUIT_PUBLIC_KEY_HASH_FILE.open("w", encoding="utf-8") as file:
                file.write(current_public_key_hash)
            set_secure_permissions(BISCUIT_PUBLIC_KEY_HASH_FILE)

            # Update the private key file and its hash
            with BISCUIT_PRIVATE_KEY_FILE.open("w", encoding="utf-8") as file:
                file.write(biscuit_private_key_hex)
            set_secure_permissions(BISCUIT_PRIVATE_KEY_FILE)

            with BISCUIT_PRIVATE_KEY_HASH_FILE.open("w", encoding="utf-8") as file:
                file.write(current_private_key_hash)
            set_secure_permissions(BISCUIT_PRIVATE_KEY_HASH_FILE)
        else:
            LOGGER.info("The Biscuit keys have not changed since the last restart.")

        LOGGER.info("Biscuit keys securely stored.")
        LOGGER.info("Biscuit key hashes securely stored for change detection.")
    except Exception as e:
        message = f"An error occurred while handling Biscuit keys: {e}"
        LOGGER.debug(format_exc())
        LOGGER.critical(message)
        ERROR_FILE.write_text(message, encoding="utf-8")
        exit(1)
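The `.hash` companion files above exist only for change detection: key material is rewritten (with 600 permissions) whenever the current SHA-256 digest differs from the stored one, or no digest exists yet. A minimal sketch of that pattern, using a hypothetical hash-file path:

```python
from hashlib import sha256
from pathlib import Path


def key_changed(key_hex: str, hash_file: Path) -> bool:
    """True when the stored hash is absent or no longer matches the key material."""
    current = sha256(key_hex.encode("utf-8")).hexdigest()
    previous = hash_file.read_text(encoding="utf-8").strip() if hash_file.is_file() else None
    return previous is None or current != previous


def store_hash(key_hex: str, hash_file: Path) -> None:
    """Persist the digest with owner-only permissions, like set_secure_permissions."""
    hash_file.write_text(sha256(key_hex.encode("utf-8")).hexdigest(), encoding="utf-8")
    hash_file.chmod(0o600)
```

Storing a digest rather than comparing the key files directly lets the check work even when the keys arrive from environment variables instead of files.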
    DB = APIDatabase(LOGGER)
    current_time = datetime.now().astimezone()

    ready = False
    while not ready:
        if (datetime.now().astimezone() - current_time).total_seconds() > 60:
            message = "Timed out while waiting for the database to be initialized."
            LOGGER.error(message)
            ERROR_FILE.write_text(message, encoding="utf-8")
            exit(1)

        db_metadata = DB.get_metadata()
        if isinstance(db_metadata, str) or not db_metadata["is_initialized"]:
            LOGGER.warning("Database is not initialized, retrying in 5s ...")
        else:
            ready = True
            continue
        sleep(5)

    current_time = datetime.now().astimezone()

    API_USER = "Error"
    while API_USER == "Error":
        if (datetime.now().astimezone() - current_time).total_seconds() > 60:
            message = "Timed out while waiting for the API user."
            LOGGER.error(message)
            ERROR_FILE.write_text(message, encoding="utf-8")
            exit(1)

        try:
            API_USER = DB.get_api_user(as_dict=True)
        except BaseException as e:
            LOGGER.debug(f"Couldn't get the API user: {e}")
            sleep(1)

    env_api_username = getenv("API_USERNAME", "")
    env_api_password = getenv("API_PASSWORD", "")

    if API_USER:
        # Note: the UI TOTP keys logic does not apply to API users
        if env_api_username or env_api_password:
            override_api_creds = getenv("OVERRIDE_API_CREDS", "no").lower() == "yes"
            if API_USER["method"] == "manual" or override_api_creds:
                updated = False
                if env_api_username and API_USER["username"] != env_api_username:
                    API_USER["username"] = env_api_username
                    updated = True

                if env_api_password and not check_password(env_api_password, API_USER["password"]):
                    if not USER_PASSWORD_RX.match(env_api_password):
                        LOGGER.warning(
                            "The API password is not strong enough. It must contain at least 8 characters, including at least 1 uppercase letter, 1 lowercase letter, 1 number and 1 special character. It will not be updated."
                        )
                    else:
                        API_USER["password"] = gen_password_hash(env_api_password)
                        updated = True

                if updated:
                    if override_api_creds:
                        LOGGER.warning("Overriding the API user credentials, as the OVERRIDE_API_CREDS environment variable is set to 'yes'.")
                    err = DB.update_api_user(
                        API_USER["username"],
                        API_USER["password"],
                        method="manual",
                    )
                    if err:
                        LOGGER.error(f"Couldn't update the API user in the database: {err}")
                    else:
                        LOGGER.info("The API user was updated successfully.")
            else:
                LOGGER.warning("The API user wasn't created manually. You can't change it from the environment variables.")
    elif env_api_username and env_api_password:
        user_name = env_api_username

        if not DEBUG:
            if len(user_name) > 256:
                message = "The API username is too long. It must be less than 256 characters."
                LOGGER.error(message)
                ERROR_FILE.write_text(message, encoding="utf-8")
                exit(1)
            elif not USER_PASSWORD_RX.match(env_api_password):
                message = "The API password is not strong enough. It must contain at least 8 characters, including at least 1 uppercase letter, 1 lowercase letter, 1 number and 1 special character."
                LOGGER.error(message)
                ERROR_FILE.write_text(message, encoding="utf-8")
                exit(1)

        ret = DB.create_api_user(user_name, gen_password_hash(env_api_password), admin=True, method="manual")
        if ret and "already exists" not in ret:
            message = f"Couldn't create the API user in the database: {ret}"
            LOGGER.critical(message)
            ERROR_FILE.write_text(message, encoding="utf-8")
            exit(1)

    # Build the ACL cache file (redis-like JSON) for API users
    try:
        acl_data = {
            "generated_at": datetime.now().astimezone().isoformat(),
            "users": {},
        }

        # Collect all API users and their permissions using the model helper
        all_users = DB.list_api_users()

        for uname, is_admin in all_users:
            if is_admin:
                acl_data["users"][uname] = {
                    "admin": True,
                    "permissions": {},
                }
            else:
                # as_dict=True to get the nested mapping resource_type -> resource_id|* -> {perm: True}
                perms = DB.get_api_permissions(uname, as_dict=True)
                acl_data["users"][uname] = {
                    "admin": False,
                    "permissions": perms,
                }

        # Persist to disk with 600 permissions
        with API_ACL_FILE.open("w", encoding="utf-8") as f:
            dump(acl_data, f)
        set_secure_permissions(API_ACL_FILE)
        LOGGER.info(f"ACL cache written to {API_ACL_FILE}")
    except Exception as e:
        LOGGER.error(f"Failed to build ACL cache: {e}")
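The ACL cache written above is a plain JSON document; a consumer would grant access when the user is an admin, or when the permission is set for the specific resource id or the `*` wildcard. An illustrative check, following the nesting built above (the function itself is not part of this diff):

```python
from typing import Any, Dict


def is_allowed(acl: Dict[str, Any], username: str, permission: str, resource_type: str, resource_id: str) -> bool:
    """Admins bypass all checks; others need the permission on the resource id or on '*'."""
    user = acl.get("users", {}).get(username)
    if user is None:
        return False
    if user.get("admin"):
        return True
    by_type = user.get("permissions", {}).get(resource_type, {})
    # A grant on the exact id or on the global wildcard is enough
    for rid in (resource_id, "*"):
        if by_type.get(rid, {}).get(permission):
            return True
    return False
```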
    # Optional: bootstrap API users and permissions from a JSON file
    # Env: API_ACL_BOOTSTRAP_FILE (defaults to /var/lib/bunkerweb/api_acl_bootstrap.json if present)
    try:
        bootstrap_path = (getenv("API_ACL_BOOTSTRAP_FILE", "") or "").strip()
        default_bootstrap = LIB_DIR.joinpath("api_acl_bootstrap.json")
        if not bootstrap_path and default_bootstrap.is_file():
            bootstrap_path = default_bootstrap.as_posix()

        if bootstrap_path:
            bp = Path(bootstrap_path)
            if not bp.is_file():
                LOGGER.warning(f"API_ACL_BOOTSTRAP_FILE set but not found: {bootstrap_path}")
            else:
                LOGGER.info(f"Bootstrapping API users/permissions from {bootstrap_path}")
                raw = bp.read_text(encoding="utf-8")
                data = loads(raw)

                users_obj = data.get("users", [])
                # Allow both dict and list forms
                if isinstance(users_obj, dict):
                    iterable = [(uname, udata) for uname, udata in users_obj.items()]
                elif isinstance(users_obj, list):
                    iterable = []
                    for entry in users_obj:
                        if isinstance(entry, dict) and "username" in entry:
                            uname = str(entry.get("username", "")).strip()
                            if uname:
                                iterable.append((uname, entry))
                else:
                    iterable = []
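Both accepted bootstrap shapes, a `users` mapping keyed by username or a list of entries carrying a `username` field, are normalized into the same `(username, data)` pairs. The normalization can be sketched standalone:

```python
from typing import Any, Dict, List, Tuple


def normalize_users(users_obj: Any) -> List[Tuple[str, Dict[str, Any]]]:
    """Accept dict or list forms of the bootstrap `users` key; anything else yields no users."""
    if isinstance(users_obj, dict):
        return list(users_obj.items())
    pairs: List[Tuple[str, Dict[str, Any]]] = []
    if isinstance(users_obj, list):
        for entry in users_obj:
            # Only dict entries with a non-empty username are kept
            if isinstance(entry, dict) and "username" in entry:
                uname = str(entry.get("username", "")).strip()
                if uname:
                    pairs.append((uname, entry))
    return pairs
```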
                # Helper: check whether the user already exists
                def _get_user(username: str):
                    try:
                        return DB.get_api_user(username=username)  # type: ignore[arg-type]
                    except Exception:
                        return None

                for uname, udata in iterable:
                    if not uname:
                        continue
                    admin_flag = bool(udata.get("admin", False)) if isinstance(udata, dict) else False
                    pwd_hash = None
                    if isinstance(udata, dict):
                        # Accept either a plaintext password or a bcrypt hash
                        plain = udata.get("password")
                        hashed = udata.get("password_hash") or udata.get("password_bcrypt")
                        if isinstance(hashed, str) and hashed:
                            pwd_hash = hashed.encode("utf-8")
                        elif isinstance(plain, str) and plain:
                            if not DEBUG and not USER_PASSWORD_RX.match(plain):
                                LOGGER.warning(f"Skipping weak password for user {uname}; generating a random one instead")
                                plain = token_hex(24)
                            pwd_hash = gen_password_hash(plain)

                    existing = _get_user(uname)
                    if existing:
                        # Update the password if one was provided
                        if pwd_hash is not None:
                            err = DB.update_api_user(uname, pwd_hash, method="manual")
                            if err:
                                LOGGER.error(f"Couldn't update API user {uname}: {err}")
                    else:
                        # No user -> create one
                        if pwd_hash is None:
                            # Generate a secure random password when none is provided
                            rand_pwd = token_hex(24)
                            pwd_hash = gen_password_hash(rand_pwd)
                            LOGGER.warning(f"No password provided for {uname}; generated a random one. Please rotate it.")

                        ret = DB.create_api_user(uname, pwd_hash, admin=admin_flag, method="manual")
                        if ret:
                            if admin_flag and "An admin user already exists" in ret:
                                LOGGER.warning(f"Admin user already exists; creating {uname} as non-admin")
                                ret2 = DB.create_api_user(uname, pwd_hash, admin=False, method="manual")
                                if ret2:
                                    LOGGER.error(f"Couldn't create API user {uname}: {ret2}")
                            elif "already exists" in ret:
                                LOGGER.info(f"User {uname} already exists; skipping creation")
                            else:
                                LOGGER.error(f"Couldn't create API user {uname}: {ret}")

                    # Apply permissions if provided
                    perms = {}
                    if isinstance(udata, dict):
                        perms = udata.get("permissions", {}) or {}

                    try:
                        if isinstance(perms, dict):
                            for rtype, rid_map in perms.items():
                                if not isinstance(rtype, str):
                                    continue
                                if isinstance(rid_map, dict):
                                    for rid, perm_map in rid_map.items():
                                        # Accept "*" or null as the global resource
                                        rid_norm = None if rid in (None, "*", "") else rid
                                        if isinstance(perm_map, dict):
                                            for pname, granted in perm_map.items():
                                                if not isinstance(pname, str):
                                                    continue
                                                if bool(granted):
                                                    err = DB.grant_api_permission(uname, pname, resource_type=rtype, resource_id=rid_norm, granted=True)
                                                    if err:
                                                        LOGGER.warning(f"Couldn't grant {uname}:{pname} on {rtype}/{rid_norm or '*'}: {err}")
                                else:
                                    LOGGER.warning(f"Invalid permissions format for user {uname} at resource_type={rtype}")
                        elif isinstance(perms, list):
                            for p in perms:
                                if not isinstance(p, dict):
                                    continue
                                pname = p.get("permission")
                                rtype = p.get("resource_type")
                                rid = p.get("resource_id")
                                granted = bool(p.get("granted", True))
                                if isinstance(pname, str) and isinstance(rtype, str) and granted:
                                    err = DB.grant_api_permission(uname, pname, resource_type=rtype, resource_id=rid, granted=True)
                                    if err:
                                        LOGGER.warning(f"Couldn't grant {uname}:{pname} on {rtype}/{rid or '*'}: {err}")
                    except Exception as e:
                        LOGGER.error(f"Failed to apply permissions for {uname}: {e}")
    except JSONDecodeError as e:
        LOGGER.error(f"Invalid JSON in API ACL bootstrap file: {e}")
    except Exception as e:
        LOGGER.error(f"Error while bootstrapping API ACL: {e}")

    # Safety check: ensure at least one authentication path is configured
    try:
        admin_exists = bool(DB.get_api_user(as_dict=True))
    except Exception:
        admin_exists = False

    api_token_present = bool(getenv("API_TOKEN", "").strip())

    if not (keys_loaded or admin_exists or api_token_present):
        message = "No authentication configured: no Biscuit keys provided (ACL), no admin API user, and no API_TOKEN. Exiting."
        LOGGER.critical(message)
        ERROR_FILE.write_text(message, encoding="utf-8")
        exit(1)

    LOGGER.info("API is ready")


def when_ready(server):
    HEALTH_FILE.write_text("ok", encoding="utf-8")


def on_exit(server):
    HEALTH_FILE.unlink(missing_ok=True)
    API_ACL_FILE.unlink(missing_ok=True)
79  src/api/utils/logger.py  Normal file
@@ -0,0 +1,79 @@
from contextlib import suppress
from logging import Formatter, getLogger
from os.path import join, sep
from sys import path as sys_path
from threading import Lock

for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
    if deps_path not in sys_path:
        sys_path.append(deps_path)

from gunicorn.glogging import Logger

from logger import DATE_FORMAT, LOG_FORMAT, setup_logger  # type: ignore


class APILogger(Logger):

    error_log = None
    access_log = None

    error_fmt = LOG_FORMAT
    datefmt = DATE_FORMAT

    def __init__(self, cfg):
        if not self.error_log:
            # Use common utils setup to fetch level from CUSTOM_LOG_LEVEL/LOG_LEVEL
            self.error_log = setup_logger("API")  # type: ignore
            self.error_log.propagate = False
        if not self.access_log:
            self.access_log = setup_logger("API.access")  # type: ignore
            self.access_log.propagate = False
        self.error_handlers = []
        self.access_handlers = []
        self.logfile = None
        self.lock = Lock()
        self.cfg = cfg
        self.setup(cfg)

    def setup(self, cfg):  # type: ignore[override]
        # Let Gunicorn configure its handlers first
        super().setup(cfg)

        # Align uvicorn loggers with our format/handlers so all logs are consistent
        with suppress(Exception):
            uvicorn_error = getLogger("uvicorn.error")
            uvicorn_access = getLogger("uvicorn.access")

            def _sync_handlers(target, source):
                # Replace target handlers with ours, enforcing formatter
                target.handlers = []
                for h in source.handlers:
                    if h.formatter is None or getattr(h.formatter, "_fmt", None) != LOG_FORMAT:
                        h.setFormatter(Formatter(fmt=LOG_FORMAT, datefmt=DATE_FORMAT))  # type: ignore
                    target.addHandler(h)
                target.setLevel(source.level)
                target.propagate = False

            _sync_handlers(uvicorn_error, self.error_log)
            _sync_handlers(uvicorn_access, self.access_log)

            # Rename loggers by filtering records' name to match API naming
            class _RenameFilter:
                def __init__(self, new_name: str):
                    self._new_name = new_name

                def filter(self, record):  # type: ignore[override]
                    record.name = self._new_name
                    return True

            uvicorn_error.addFilter(_RenameFilter("API"))
            uvicorn_access.addFilter(_RenameFilter("API.access"))

            # Ensure uvicorn logger families use the same level
            for lname, src in (
                ("uvicorn", self.error_log),
                ("uvicorn.server", self.error_log),
            ):
                with suppress(Exception):
                    getLogger(lname).setLevel(src.level)  # type: ignore
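The `_RenameFilter` trick in `logger.py` relies on logger-level filters running before any handler sees the record, so mutating `record.name` in `filter()` rewrites what the formatter prints. A runnable sketch of just that mechanism (the logger name and format string here are illustrative):

```python
import logging
from io import StringIO

class RenameFilter(logging.Filter):
    """Rewrite record.name so records surface under a unified logger name."""

    def __init__(self, new_name: str):
        super().__init__()
        self._new_name = new_name

    def filter(self, record: logging.LogRecord) -> bool:
        record.name = self._new_name
        return True  # never drop the record, only relabel it

stream = StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(name)s - %(message)s"))

log = logging.getLogger("uvicorn.error.demo")  # hypothetical source logger
log.setLevel(logging.INFO)
log.addHandler(handler)
log.addFilter(RenameFilter("API"))
log.propagate = False

log.info("listening")
print(stream.getvalue().strip())  # → API - listening
```

Because the filter sits on the logger rather than the handler, every handler attached afterwards sees the relabeled record, which is what keeps Gunicorn and uvicorn output visually uniform.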
12  src/api/utils/worker.py  Normal file
@@ -0,0 +1,12 @@
from typing import Any, Dict

from uvicorn.workers import UvicornWorker


class ApiUvicornWorker(UvicornWorker):
    CONFIG_KWARGS: Dict[str, Any] = {
        "loop": "auto",
        "http": "auto",
        "proxy_headers": True,
        "server_header": False,
        "date_header": False,
    }
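`UvicornWorker` folds the class attribute `CONFIG_KWARGS` into the uvicorn `Config` it builds, so subclassing and overriding that one dict is the entire customization surface. A dependency-free sketch of the mechanism (the class and method names below are illustrative, not uvicorn's actual internals):

```python
from typing import Any, Dict

class BaseWorker:
    # Defaults the "framework" would apply.
    CONFIG_KWARGS: Dict[str, Any] = {"loop": "asyncio", "server_header": True}

    def build_config(self) -> Dict[str, Any]:
        config: Dict[str, Any] = {"interface": "asgi3"}  # fixed base settings
        config.update(self.CONFIG_KWARGS)                # subclass attribute wins
        return config

class ApiWorker(BaseWorker):
    # Overriding the dict is the whole customization; nothing else changes.
    CONFIG_KWARGS: Dict[str, Any] = {"loop": "auto", "server_header": False}

print(ApiWorker().build_config())
```

Disabling `server_header`/`date_header` as the real worker does trims response headers that would otherwise advertise the server software.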
@@ -62,8 +62,8 @@ RUN cp helpers/bwcli /usr/bin/ && \
     for dir in $(echo "pro/plugins configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs configs/crs-plugins-before configs/crs-plugins-after") ; do mkdir "/data/${dir}" ; done && \
     chown -R root:autoconf INTEGRATION /data /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb /usr/bin/bwcli && \
     chmod -R 770 /data /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
-    find . \( -path './autoconf' -o -path './cli' -o -path './core' -o -path './db' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
-    find autoconf cli core db helpers deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+    find . \( -path './autoconf' -o -path './cli' -o -path './core' -o -path './db' -o -path './helpers' -o -path './utils' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+    find autoconf cli core db helpers utils deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
     chmod 660 INTEGRATION

 LABEL maintainer="Bunkerity <contact@bunkerity.com>"
@@ -89,8 +89,8 @@ RUN cp helpers/bwcli /usr/bin/ && \
     chown -R root:nginx /data /etc/nginx /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb /usr/bin/bwcli && \
     chmod -R 770 /data /etc/nginx /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/log/bunkerweb /var/run/bunkerweb && \
     chmod 2770 /var/tmp/bunkerweb && \
-    find . \( -path './cli' -o -path './lua' -o -path './core' -o -path './gen' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
-    find cli lua gen helpers deps -type f ! -path 'deps/bin/*' ! -path 'deps/python/bin/*' ! -name '*.lua' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+    find . \( -path './cli' -o -path './lua' -o -path './core' -o -path './gen' -o -path './helpers' -o -path './utils' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+    find cli lua gen helpers utils deps -type f ! -path 'deps/bin/*' ! -path 'deps/python/bin/*' ! -name '*.lua' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
     find core -type f ! -name '*.lua' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
     chmod 550 entrypoint.sh && \
     rm -f /var/log/bunkerweb/* && \
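Both Dockerfile hunks use the same two-pass idiom: a `find … \( -path … \) -prune -o -type f` pass that locks down everything outside a set of directories, then a second pass over those directories with exceptions. A toy, self-contained reproduction of the prune pass (paths and the file layout are invented for the demo):

```shell
set -eu
# Toy layout: lock down every file except those under ./deps,
# mirroring the `find ... -prune -o -type f ... chmod 440` pattern above.
tmp="$(mktemp -d)"
mkdir -p "$tmp/deps" "$tmp/core"
echo "app" > "$tmp/main.py"
echo "lib" > "$tmp/deps/lib.so"
cd "$tmp"

# -prune stops descent into ./deps; the -o branch matches everything else.
find . -path './deps' -prune -o -type f -print0 | xargs -0 chmod 440

ls -l main.py deps/lib.so
```

The `-print0 | xargs -0 -P "$(nproc)" -n 1024` form in the real Dockerfile is the same traversal, just NUL-safe and parallelized in batches of 1024 files.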
@@ -91,7 +91,7 @@ class Database:
     SUFFIX_RX = re_compile(r"(?P<setting>.+)_(?P<suffix>\d+)$")

     def __init__(
-        self, logger: Logger, sqlalchemy_string: Optional[str] = None, *, ui: bool = False, pool: Optional[bool] = None, log: bool = True, **kwargs
+        self, logger: Logger, sqlalchemy_string: Optional[str] = None, *, external: bool = False, pool: Optional[bool] = None, log: bool = True, **kwargs
     ) -> None:
         """Initialize the database"""
         self.logger = logger
@@ -134,7 +134,7 @@ class Database:
         # Handle SQLite database
         if db_type.startswith("sqlite"):
             path = Path(db_path)
-            if ui:
+            if external:
                 while not path.is_file():
                     if log:
                         self.logger.warning(f"Waiting for the database file to be created: {path}")
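The `ui` → `external` rename generalizes the SQLite branch: any external consumer (now including the API service, not just the UI) waits for another process to create the database file instead of creating it itself. A small sketch of that wait loop, with a timeout added for the demo (function name and timeout are mine, not the codebase's):

```python
from pathlib import Path
from tempfile import gettempdir
from time import sleep

def wait_for_sqlite_file(db_path: str, external: bool, timeout: float = 1.0) -> bool:
    """Sketch of the `external` branch above: an external consumer polls
    until another process has created the SQLite file."""
    path = Path(db_path)
    if not external:
        # Owner processes don't wait; they create the file themselves.
        return path.is_file()
    waited = 0.0
    while not path.is_file():
        if waited >= timeout:
            return False
        sleep(0.05)
        waited += 0.05
    return True

existing = Path(gettempdir()) / "bw_demo.sqlite3"
existing.write_text("")
print(wait_for_sqlite_file(str(existing), external=True))                  # True
print(wait_for_sqlite_file(str(existing) + ".missing", external=False))    # False
```

The real loop logs a warning on each pass (gated by `log`) rather than timing out, since the scheduler is expected to create the file eventually.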
@@ -2036,6 +2036,7 @@ class Database:
         for key in config.copy().keys():
             if (original_config is None or key not in ("SERVER_NAME", "MULTISITE", "USE_TEMPLATE")) and not key.startswith(f"{service}_"):
                 del config[key]
+                continue
             if original_config is None:
                 config[key.replace(f"{service}_", "")] = config.pop(key)
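The one-line `continue` matters: without it, a key that was just `del`eted falls through to `config.pop(key)` and raises `KeyError`. The loop's core, isolated as a simplified function (the function name is mine, and the `original_config` branch is dropped for brevity):

```python
def extract_service_config(config: dict, service: str) -> dict:
    """Keep only one service's keys and strip its prefix, in place."""
    for key in config.copy().keys():
        if not key.startswith(f"{service}_"):
            del config[key]
            continue  # without this, the rename below would pop a deleted key
        config[key.replace(f"{service}_", "", 1)] = config.pop(key)
    return config

print(extract_service_config({"app1_HOST": "a", "app2_HOST": "b", "LOG_LEVEL": "info"}, "app1"))
# → {'HOST': 'a'}
```

Iterating over `config.copy().keys()` is what makes the in-place deletes and renames safe during traversal.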
@@ -3612,8 +3613,8 @@ class Database:
                     "async": job.run_async,
                     "history": [
                         {
-                            "start_date": job_run.start_date.isoformat(),
-                            "end_date": job_run.end_date.isoformat(),
+                            "start_date": job_run.start_date.astimezone().isoformat(),
+                            "end_date": job_run.end_date.astimezone().isoformat(),
                             "success": job_run.success,
                         }
                         for job_run in session.query(Jobs_runs)
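Why this hunk matters: calling `isoformat()` on a naive datetime yields a string with no UTC offset, which downstream consumers cannot interpret unambiguously; `astimezone()` first attaches the local timezone, making the serialized timestamp self-describing.

```python
from datetime import datetime

naive = datetime(2025, 1, 1, 12, 30)
print(naive.isoformat())               # 2025-01-01T12:30:00  (no offset)
print(naive.astimezone().isoformat())  # e.g. 2025-01-01T12:30:00+01:00, offset depends on host TZ
```

On a naive datetime, `astimezone()` assumes the system's local timezone, so this only produces correct offsets if the stored values really are local times.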
@@ -3892,6 +3893,48 @@ class Database:

        return ""

    def update_instance_fields(
        self,
        hostname: str,
        *,
        name: Optional[str] = None,
        port: Optional[int] = None,
        server_name: Optional[str] = None,
        method: Optional[str] = None,
        changed: Optional[bool] = True,
    ) -> str:
        """Update instance metadata fields (name, port, server_name, method)."""
        with self._db_session() as session:
            if self.readonly:
                return "The database is read-only, the changes will not be saved"

            db_instance = session.query(Instances).filter_by(hostname=hostname).first()
            if db_instance is None:
                return f"Instance {hostname} does not exist, will not be updated."

            if name is not None:
                db_instance.name = name
            if port is not None:
                db_instance.port = port
            if server_name is not None:
                db_instance.server_name = server_name
            if method is not None:
                db_instance.method = method

            if changed:
                with suppress(ProgrammingError, OperationalError):
                    metadata = session.query(Metadata).get(1)
                    if metadata is not None:
                        metadata.instances_changed = True
                        metadata.last_instances_change = datetime.now().astimezone()

            try:
                session.commit()
            except BaseException as e:
                return f"An error occurred while updating the instance {hostname}.\n{e}"

        return ""

    def get_instances(self, *, method: Optional[str] = None) -> List[Dict[str, Any]]:
        """Get instances."""
        with self._db_session() as session:
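`update_instance_fields` uses the common None-sentinel pattern for partial updates: only fields the caller explicitly passed (non-None) are written, so a PATCH-style API call can update one field without clobbering the rest. Reduced to its core (class and values here are a toy, not the ORM model):

```python
from typing import Optional

class Instance:
    def __init__(self) -> None:
        self.name = "bw-1"
        self.port = 5000

def update_fields(obj: Instance, *, name: Optional[str] = None, port: Optional[int] = None) -> None:
    # None means "not provided", so the existing value is preserved.
    if name is not None:
        obj.name = name
    if port is not None:
        obj.port = port

inst = Instance()
update_fields(inst, port=8080)   # name untouched
print(inst.name, inst.port)      # → bw-1 8080
```

The trade-off of this pattern is that None itself can never be a legal new value; fields that must be clearable need a distinct sentinel.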
@@ -8,7 +8,7 @@ from sqlalchemy.schema import UniqueConstraint

 CONTEXTS_ENUM = Enum("global", "multisite", name="contexts_enum")
 SETTINGS_TYPES_ENUM = Enum("password", "text", "number", "check", "select", "multiselect", "multivalue", name="settings_types_enum")
-METHODS_ENUM = Enum("ui", "scheduler", "autoconf", "manual", "wizard", name="methods_enum")
+METHODS_ENUM = Enum("api", "ui", "scheduler", "autoconf", "manual", "wizard", name="methods_enum")
 SCHEDULES_ENUM = Enum("once", "minute", "hour", "day", "week", name="schedules_enum")
 CUSTOM_CONFIGS_TYPES_ENUM = Enum(
     "http",
@@ -481,3 +481,87 @@ class UserColumnsPreferences(Base):
    columns = Column(JSONText, nullable=False)

    user = relationship("Users", back_populates="columns_preferences")


## API Models

API_PERMISSION_ENUM = Enum(
    # Instance permissions
    "instances_create",
    "instances_read",
    "instances_update",
    "instances_delete",
    "instances_execute",
    # Global config permissions
    "global_config_read",
    "global_config_update",
    # Service permissions
    "service_create",
    "service_read",
    "service_update",
    "service_delete",
    "service_convert",
    "service_export",
    # Config permissions
    "config_create",
    "configs_read",
    "config_read",
    "config_update",
    "config_delete",
    # Plugin permissions
    "plugin_create",
    "plugin_read",
    "plugin_delete",
    # Cache permissions
    "cache_read",
    "cache_delete",
    # Ban permissions
    "ban_created",
    "ban_read",
    "ban_update",
    "ban_delete",
    # Job permissions
    "job_read",
    "job_run",
    name="api_permission_enum",
)

API_RESOURCE_ENUM = Enum(
    "instances",
    "global_config",
    "services",
    "configs",
    "plugins",
    "cache",
    "bans",
    "jobs",
    name="api_resource_enum",
)


class API_users(Base):
    __tablename__ = "bw_api_users"

    username = Column(String(256), primary_key=True)
    password = Column(String(60), nullable=False)
    method = Column(METHODS_ENUM, nullable=False, default="manual")
    admin = Column(Boolean, nullable=False, default=False)
    creation_date = Column(DateTime(timezone=True), nullable=False)
    update_date = Column(DateTime(timezone=True), nullable=False)

    permissions = relationship("API_permissions", back_populates="user", cascade="all, delete-orphan")


class API_permissions(Base):
    __tablename__ = "bw_api_user_permissions"

    id = Column(Integer, Identity(start=1, increment=1), primary_key=True)
    api_user = Column(String(256), ForeignKey("bw_api_users.username", onupdate="cascade", ondelete="cascade"), nullable=False)
    resource_type = Column(API_RESOURCE_ENUM, nullable=False)
    resource_id = Column(String(256), nullable=True)
    permission = Column(String(512), nullable=False)
    granted = Column(Boolean, nullable=False)
    created_at = Column(DateTime(timezone=True), nullable=False)
    updated_at = Column(DateTime(timezone=True), nullable=False)

    user = relationship("API_users", back_populates="permissions")
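`API_permissions` rows pair a `permission` with an optional `resource_id`: a row with `resource_id` NULL plausibly acts as a wildcard for the whole `resource_type`, while a row with a concrete id targets one resource. A hedged sketch of how such rows could be evaluated; the deny-by-default and specific-overrides-wildcard ordering is my assumption, not necessarily the service's actual logic:

```python
from typing import Iterable, Optional

def is_granted(rows: Iterable[dict], resource_type: str, permission: str, resource_id: Optional[str] = None) -> bool:
    # Deny by default; a resource-specific row overrides a wildcard row.
    wildcard: Optional[bool] = None
    specific: Optional[bool] = None
    for row in rows:
        if row["resource_type"] != resource_type or row["permission"] != permission:
            continue
        if row["resource_id"] is None:
            wildcard = row["granted"]
        elif row["resource_id"] == resource_id:
            specific = row["granted"]
    if specific is not None:
        return specific
    if wildcard is not None:
        return wildcard
    return False

rows = [
    {"resource_type": "services", "permission": "service_read", "resource_id": None, "granted": True},
    {"resource_type": "services", "permission": "service_read", "resource_id": "app1", "granted": False},
]
print(is_granted(rows, "services", "service_read", "app2"))  # True  (wildcard grant)
print(is_granted(rows, "services", "service_read", "app1"))  # False (specific deny wins)
```

Storing `granted` as an explicit boolean (rather than presence-of-row meaning grant) is what makes targeted denies like the `app1` row expressible at all.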
@@ -1,6 +1,6 @@
 alembic==1.16.5
 cryptography==45.0.7
 # oracledb==3.0.0
-psycopg[c,pool]==3.2.9
+psycopg[c,pool]==3.2.10
 PyMySQL==1.1.2
 sqlalchemy==2.0.43
@@ -8,74 +8,91 @@ alembic==1.16.5 \
     --hash=sha256:a88bb7f6e513bd4301ecf4c7f2206fe93f9913f9b48dac3b78babde2d6fe765e \
     --hash=sha256:e845dfe090c5ffa7b92593ae6687c5cb1a101e91fa53868497dbd79847f9dbe3
     # via -r requirements.armv7.in
-cffi==1.17.1 \
-    --hash=sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8 \
-    --hash=sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2 \
-    --hash=sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1 \
-    --hash=sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15 \
-    --hash=sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36 \
-    --hash=sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824 \
-    --hash=sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8 \
-    --hash=sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36 \
-    --hash=sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17 \
-    --hash=sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf \
-    --hash=sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc \
-    --hash=sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3 \
-    --hash=sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed \
-    --hash=sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702 \
-    --hash=sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1 \
-    --hash=sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8 \
-    --hash=sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903 \
-    --hash=sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6 \
-    --hash=sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d \
-    --hash=sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b \
-    --hash=sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e \
-    --hash=sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be \
-    --hash=sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c \
-    --hash=sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683 \
-    --hash=sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9 \
-    --hash=sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c \
-    --hash=sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8 \
-    --hash=sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1 \
-    --hash=sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4 \
-    --hash=sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655 \
-    --hash=sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67 \
-    --hash=sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595 \
-    --hash=sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0 \
-    --hash=sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65 \
-    --hash=sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41 \
-    --hash=sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6 \
-    --hash=sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401 \
-    --hash=sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6 \
-    --hash=sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3 \
-    --hash=sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16 \
-    --hash=sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93 \
-    --hash=sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e \
-    --hash=sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4 \
-    --hash=sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964 \
-    --hash=sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c \
-    --hash=sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576 \
-    --hash=sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0 \
-    --hash=sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3 \
-    --hash=sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662 \
-    --hash=sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3 \
-    --hash=sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff \
-    --hash=sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5 \
-    --hash=sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd \
-    --hash=sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f \
-    --hash=sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5 \
-    --hash=sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14 \
-    --hash=sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d \
-    --hash=sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9 \
-    --hash=sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7 \
-    --hash=sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382 \
-    --hash=sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a \
-    --hash=sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e \
-    --hash=sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a \
-    --hash=sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4 \
-    --hash=sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99 \
-    --hash=sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87 \
-    --hash=sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b
+cffi==2.0.0 \
+    --hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
+    --hash=sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b \
+    --hash=sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f \
+    --hash=sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9 \
+    --hash=sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44 \
+    --hash=sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2 \
+    --hash=sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c \
+    --hash=sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75 \
+    --hash=sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65 \
+    --hash=sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e \
+    --hash=sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a \
+    --hash=sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e \
+    --hash=sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25 \
+    --hash=sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a \
+    --hash=sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe \
+    --hash=sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b \
+    --hash=sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91 \
+    --hash=sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592 \
+    --hash=sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187 \
+    --hash=sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c \
+    --hash=sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1 \
+    --hash=sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94 \
+    --hash=sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba \
+    --hash=sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb \
+    --hash=sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165 \
+    --hash=sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529 \
+    --hash=sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca \
+    --hash=sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c \
+    --hash=sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6 \
+    --hash=sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c \
+    --hash=sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0 \
+    --hash=sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743 \
+    --hash=sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63 \
+    --hash=sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5 \
+    --hash=sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5 \
+    --hash=sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4 \
+    --hash=sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d \
+    --hash=sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b \
+    --hash=sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93 \
+    --hash=sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205 \
+    --hash=sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27 \
+    --hash=sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512 \
+    --hash=sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d \
+    --hash=sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c \
+    --hash=sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037 \
+    --hash=sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26 \
+    --hash=sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322 \
+    --hash=sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb \
+    --hash=sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c \
+    --hash=sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8 \
+    --hash=sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4 \
+    --hash=sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414 \
+    --hash=sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9 \
+    --hash=sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664 \
+    --hash=sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9 \
+    --hash=sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775 \
+    --hash=sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739 \
+    --hash=sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc \
+    --hash=sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062 \
+    --hash=sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe \
+    --hash=sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9 \
+    --hash=sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92 \
+    --hash=sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5 \
+    --hash=sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13 \
+    --hash=sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d \
+    --hash=sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26 \
+    --hash=sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f \
+    --hash=sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495 \
+    --hash=sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b \
+    --hash=sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6 \
+    --hash=sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c \
+    --hash=sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef \
+    --hash=sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5 \
+    --hash=sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18 \
+    --hash=sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad \
+    --hash=sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3 \
+    --hash=sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7 \
+    --hash=sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5 \
+    --hash=sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534 \
+    --hash=sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49 \
+    --hash=sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2 \
+    --hash=sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5 \
+    --hash=sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453 \
+    --hash=sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf
     # via cryptography
 cryptography==45.0.7 \
     --hash=sha256:06ce84dc14df0bf6ea84666f958e6080cdb6fe1231be2a51f3fc1267d9f3fb34 \
@@ -239,20 +256,20 @@ markupsafe==3.0.2 \
     --hash=sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430 \
     --hash=sha256:fcabf5ff6eea076f859677f5f0b6b5c1a51e70a376b0579e0eadef8db48c6b50
     # via mako
-psycopg==3.2.9 \
-    --hash=sha256:01a8dadccdaac2123c916208c96e06631641c0566b22005493f09663c7a8d3b6 \
-    --hash=sha256:2fbb46fcd17bc81f993f28c47f1ebea38d66ae97cc2dbc3cad73b37cefbff700
+psycopg==3.2.10 \
+    --hash=sha256:0bce99269d16ed18401683a8569b2c5abd94f72f8364856d56c0389bcd50972a \
+    --hash=sha256:ab5caf09a9ec42e314a21f5216dbcceac528e0e05142e42eea83a3b28b320ac3
     # via -r requirements.armv7.in
-psycopg-c==3.2.9 \
-    --hash=sha256:8c9f654f20c6c56bddc4543a3caab236741ee94b6732ab7090b95605502210e2
+psycopg-c==3.2.10 \
+    --hash=sha256:30183897f5fe7ff4375b7dfcec9d44dfe8a5e009080addc1626889324a9eb1ed
     # via psycopg
 psycopg-pool==3.2.6 \
     --hash=sha256:0f92a7817719517212fbfe2fd58b8c35c1850cdd2a80d36b581ba2085d9148e5 \
     --hash=sha256:5887318a9f6af906d041a0b1dc1c60f8f0dda8340c2572b74e10907b51ed5da7
     # via psycopg
-pycparser==2.22 \
-    --hash=sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6 \
-    --hash=sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc
+pycparser==2.23 \
+    --hash=sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2 \
+    --hash=sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934
     # via cffi
 pymysql==1.1.2 \
     --hash=sha256:4961d3e165614ae65014e361811a724e2044ad3ea3739de9903ae7c21f539f03 \
@@ -8,74 +8,91 @@ alembic==1.16.5 \
     --hash=sha256:a88bb7f6e513bd4301ecf4c7f2206fe93f9913f9b48dac3b78babde2d6fe765e \
     --hash=sha256:e845dfe090c5ffa7b92593ae6687c5cb1a101e91fa53868497dbd79847f9dbe3
     # via -r requirements.in
-cffi==1.17.1 \
-    --hash=sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8 \
-    --hash=sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2 \
-    --hash=sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1 \
-    --hash=sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15 \
-    --hash=sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36 \
-    --hash=sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824 \
-    --hash=sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8 \
-    --hash=sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36 \
-    --hash=sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17 \
-    --hash=sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf \
-    --hash=sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc \
-    --hash=sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3 \
-    --hash=sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed \
-    --hash=sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702 \
-    --hash=sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1 \
-    --hash=sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8 \
-    --hash=sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903 \
-    --hash=sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6 \
-    --hash=sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d \
-    --hash=sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b \
-    --hash=sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e \
-    --hash=sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be \
-    --hash=sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c \
-    --hash=sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683 \
-    --hash=sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9 \
-    --hash=sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c \
-    --hash=sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8 \
-    --hash=sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1 \
-    --hash=sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4 \
-    --hash=sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655 \
-    --hash=sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67 \
-    --hash=sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595 \
-    --hash=sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0 \
-    --hash=sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65 \
-    --hash=sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41 \
-    --hash=sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6 \
-    --hash=sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401 \
-    --hash=sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6 \
-    --hash=sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3 \
-    --hash=sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16 \
-    --hash=sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93 \
-    --hash=sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e \
-    --hash=sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4 \
-    --hash=sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964 \
-    --hash=sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c \
-    --hash=sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576 \
-    --hash=sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0 \
-    --hash=sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3 \
-    --hash=sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662 \
-    --hash=sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3 \
-    --hash=sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff \
-    --hash=sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5 \
-    --hash=sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd \
-    --hash=sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f \
-    --hash=sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5 \
-    --hash=sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14 \
-    --hash=sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d \
-    --hash=sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9 \
-    --hash=sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7 \
-    --hash=sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382 \
-    --hash=sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a \
-    --hash=sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e \
-    --hash=sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a \
-    --hash=sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4 \
-    --hash=sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99 \
-    --hash=sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87 \
|
||||
--hash=sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b
|
||||
cffi==2.0.0 \
|
||||
--hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
|
||||
--hash=sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b \
|
||||
--hash=sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f \
|
||||
--hash=sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9 \
|
||||
--hash=sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44 \
|
||||
--hash=sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2 \
|
||||
--hash=sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c \
|
||||
--hash=sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75 \
|
||||
--hash=sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65 \
|
||||
--hash=sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e \
|
||||
--hash=sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a \
|
||||
--hash=sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e \
|
||||
--hash=sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25 \
|
||||
--hash=sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a \
|
||||
--hash=sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe \
|
||||
--hash=sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b \
|
||||
--hash=sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91 \
|
||||
--hash=sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592 \
|
||||
--hash=sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187 \
|
||||
--hash=sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c \
|
||||
--hash=sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1 \
|
||||
--hash=sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94 \
|
||||
--hash=sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba \
|
||||
--hash=sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb \
|
||||
--hash=sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165 \
|
||||
--hash=sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529 \
|
||||
--hash=sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca \
|
||||
--hash=sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c \
|
||||
--hash=sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6 \
|
||||
--hash=sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c \
|
||||
--hash=sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0 \
|
||||
--hash=sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743 \
|
||||
--hash=sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63 \
|
||||
--hash=sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5 \
|
||||
--hash=sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5 \
|
||||
--hash=sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4 \
|
||||
--hash=sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d \
|
||||
--hash=sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b \
|
||||
--hash=sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93 \
|
||||
--hash=sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205 \
|
||||
--hash=sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27 \
|
||||
--hash=sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512 \
|
||||
--hash=sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d \
|
||||
--hash=sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c \
|
||||
--hash=sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037 \
|
||||
--hash=sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26 \
|
||||
--hash=sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322 \
|
||||
--hash=sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb \
|
||||
--hash=sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c \
|
||||
--hash=sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8 \
|
||||
--hash=sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4 \
|
||||
--hash=sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414 \
|
||||
--hash=sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9 \
|
||||
--hash=sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664 \
|
||||
--hash=sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9 \
|
||||
--hash=sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775 \
|
||||
--hash=sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739 \
|
||||
--hash=sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc \
|
||||
--hash=sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062 \
|
||||
--hash=sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe \
|
||||
--hash=sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9 \
|
||||
--hash=sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92 \
|
||||
--hash=sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5 \
|
||||
--hash=sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13 \
|
||||
--hash=sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d \
|
||||
--hash=sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26 \
|
||||
--hash=sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f \
|
||||
--hash=sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495 \
|
||||
--hash=sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b \
|
||||
--hash=sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6 \
|
||||
--hash=sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c \
|
||||
--hash=sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef \
|
||||
--hash=sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5 \
|
||||
--hash=sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18 \
|
||||
--hash=sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad \
|
||||
--hash=sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3 \
|
||||
--hash=sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7 \
|
||||
--hash=sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5 \
|
||||
--hash=sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534 \
|
||||
--hash=sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49 \
|
||||
--hash=sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2 \
|
||||
--hash=sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5 \
|
||||
--hash=sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453 \
|
||||
--hash=sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf
|
||||
# via cryptography
|
||||
cryptography==45.0.7 \
|
||||
--hash=sha256:06ce84dc14df0bf6ea84666f958e6080cdb6fe1231be2a51f3fc1267d9f3fb34 \
|
||||
|
|
@@ -314,9 +331,9 @@ psycopg-pool==3.2.6 \
     --hash=sha256:0f92a7817719517212fbfe2fd58b8c35c1850cdd2a80d36b581ba2085d9148e5 \
     --hash=sha256:5887318a9f6af906d041a0b1dc1c60f8f0dda8340c2572b74e10907b51ed5da7
     # via psycopg
-pycparser==2.22 \
-    --hash=sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6 \
-    --hash=sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc
+pycparser==2.23 \
+    --hash=sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2 \
+    --hash=sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934
     # via cffi
 pymysql==1.1.2 \
     --hash=sha256:4961d3e165614ae65014e361811a724e2044ad3ea3739de9903ae7c21f539f03 \
@@ -77,6 +77,24 @@ else
     echo "Autoconf service check skipped (disabled by AUTOCONF_MODE setting)"
 fi
 
+# Check API service status only if enabled
+if [ "${SERVICE_API:-yes}" = "yes" ]; then
+    status=$(supervisorctl status "api" 2>/dev/null)
+    if ! echo "$status" | grep -q "RUNNING"; then
+        echo "Service api is not running: $status"
+        exit 1
+    fi
+
+    /usr/share/bunkerweb/helpers/healthcheck-api.sh
+    # shellcheck disable=SC2181
+    if [ $? -ne 0 ]; then
+        echo "API health check failed"
+        exit 1
+    fi
+else
+    echo "API service check skipped (disabled by SERVICE_API setting)"
+fi
+
 # Everything is fine
 echo "All enabled services are healthy"
 exit 0
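The status check in the hunk above simply greps supervisorctl's output for the `RUNNING` state. A minimal simulation of that grep, using canned status strings in place of real `supervisorctl status` output (the status lines below are illustrative, not captured from a live system), behaves as follows:

```shell
# Simulates the RUNNING check used by the healthcheck above.
# The status strings are canned examples, not real supervisor state.
is_running() {
    echo "$1" | grep -q "RUNNING"
}

is_running "api                              RUNNING   pid 42, uptime 0:01:02"; ok=$?
is_running "api                              FATAL     Exited too quickly"; bad=$?
echo "ok=$ok bad=$bad"
```

Because `grep -q` exits 0 on a match and non-zero otherwise, the surrounding `if ! ...` in the healthcheck fails the container only when `RUNNING` is absent from the status line.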
15 src/common/helpers/healthcheck-api.sh Normal file

@@ -0,0 +1,15 @@
+#!/bin/bash
+
+# API healthcheck: inspired by other service healthchecks
+
+# Ensure API process is running
+if [ ! -f /var/run/bunkerweb/api.pid ] ; then
+    exit 1
+fi
+
+# Ensure service reported healthy
+if [ ! -f /var/tmp/bunkerweb/api.healthy ] ; then
+    exit 1
+fi
+
+exit 0
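The new helper above is a pure marker-file check: it succeeds only when both the PID file and the `.healthy` flag exist. The same logic can be exercised against a throwaway directory (standing in for the real `/var/run/bunkerweb` and `/var/tmp/bunkerweb` paths):

```shell
# Reproduces the two-marker-file healthcheck logic against a temp directory.
# The temp paths are stand-ins for /var/run/bunkerweb and /var/tmp/bunkerweb.
check_api_health() {
    [ -f "$1" ] && [ -f "$2" ]
}

tmpdir=$(mktemp -d)
touch "$tmpdir/api.pid" "$tmpdir/api.healthy"
check_api_health "$tmpdir/api.pid" "$tmpdir/api.healthy"; both=$?
rm "$tmpdir/api.healthy"
check_api_health "$tmpdir/api.pid" "$tmpdir/api.healthy"; missing=$?
rm -rf "$tmpdir"
echo "both=$both missing=$missing"
```

Note that the check says nothing about whether the process behind the PID file is actually alive; it trusts the API service to create and remove the `.healthy` flag.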
@@ -40,7 +40,7 @@ pip install --no-cache-dir --require-hashes -r requirements-deps.txt
 
 echo "Updating python requirements files"
 
-files=("requirements.in" "../autoconf/requirements.in" "../scheduler/requirements.in" "../ui/requirements.in")
+files=("requirements.in" "../api/requirements.in" "../autoconf/requirements.in" "../scheduler/requirements.in" "../ui/requirements.in")
 
 shopt -s globstar
 for file in ../{common,../{docs,misc}}/**/requirements*.in
21 src/linux/bunkerweb-api.service Normal file

@@ -0,0 +1,21 @@
+[Unit]
+Description=BunkerWeb API service
+Documentation=https://docs.bunkerweb.io
+After=bunkerweb.service
+
+[Service]
+Restart=always
+RestartSec=5
+User=root
+UMask=027
+PIDFile=/var/run/bunkerweb/api.pid
+ExecStart=/usr/share/bunkerweb/scripts/bunkerweb-api.sh start
+ExecStop=/usr/share/bunkerweb/scripts/bunkerweb-api.sh stop
+ExecReload=/usr/share/bunkerweb/scripts/bunkerweb-api.sh reload
+Type=simple
+StandardOutput=journal+console
+StandardError=journal+console
+
+[Install]
+WantedBy=multi-user.target
+Alias=bunkerweb-api.service
@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveDEB.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveDEB.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -10,4 +10,4 @@
 --before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -11,4 +11,4 @@
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveDEB.sh
 --deb-no-default-config-files
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb

@@ -11,4 +11,4 @@
 --after-install /usr/share/bunkerweb/scripts/postinstall.sh
 --after-remove /usr/share/bunkerweb/scripts/afterRemoveDEB.sh
 --deb-no-default-config-files
-/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
+/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/log/bunkerweb/=/var/log/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /lib/systemd/system/bunkerweb-scheduler.service=/lib/systemd/system/bunkerweb-scheduler.service /lib/systemd/system/bunkerweb-api.service=/lib/systemd/system/bunkerweb-api.service /var/lib/bunkerweb/=/var/lib/bunkerweb /etc/logrotate.d/bunkerweb=/etc/logrotate.d/bunkerweb
252 src/linux/scripts/bunkerweb-api.sh Normal file

@@ -0,0 +1,252 @@
+#!/bin/bash
+
+# Enforce a restrictive default umask for all operations
+umask 027
+
+export PYTHONPATH=/usr/share/bunkerweb/deps/python:/usr/share/bunkerweb/api
+
+API_PID_FILE=/var/run/bunkerweb/api.pid
+
+function get_env_var() {
+    local var_name=$1
+    local default_value=$2
+    local value
+
+    # First try api.env
+    value=$(grep "^${var_name}=" /etc/bunkerweb/api.env 2>/dev/null | cut -d '=' -f 2)
+
+    # If not found, try variables.env
+    if [ -z "$value" ] && [ -f /etc/bunkerweb/variables.env ]; then
+        value=$(grep "^${var_name}=" /etc/bunkerweb/variables.env 2>/dev/null | cut -d '=' -f 2)
+    fi
+
+    # Return default if still not found
+    if [ -z "$value" ]; then
+        echo "$default_value"
+    else
+        echo "$value"
+    fi
+}
+
+start() {
+    stop
+
+    echo "Starting API"
+
+    # Create api.env with defaults if missing
+    if [ ! -f /etc/bunkerweb/api.env ]; then
+        {
+            echo "# =============================="
+            echo "# BunkerWeb API Configuration"
+            echo "# This file lists all supported API environment variables with their defaults."
+            echo "# Uncomment and adjust as needed. Lines starting with # are ignored."
+            echo "# =============================="
+            echo
+            echo "# --- Network & Proxy ---"
+            echo "# Listen address/port for the API"
+            echo "# API_LISTEN_ADDR=0.0.0.0 # synonym: LISTEN_ADDR"
+            echo "# API_LISTEN_PORT=8888 # synonym: LISTEN_PORT"
+            echo "# Trusted proxy IPs for X-Forwarded-* headers (comma-separated)."
+            echo "# Default is restricted to loopback for security."
+            echo "API_FORWARDED_ALLOW_IPS=127.0.0.1"
+            echo
+            echo "# --- Logging & Runtime ---"
+            echo "# LOG_LEVEL affects most components; CUSTOM_LOG_LEVEL overrides when provided."
+            echo "# LOG_LEVEL=info"
+            echo "# CUSTOM_LOG_LEVEL=info"
+            echo "# Capture Gunicorn/uvicorn output to files instead of stdout/stderr."
+            echo "CAPTURE_OUTPUT=yes"
+            echo "# Number of workers/threads (auto if unset)."
+            echo "# MAX_WORKERS=<auto>"
+            echo "# MAX_THREADS=<auto>"
+            echo
+            echo "# --- Authentication & Authorization ---"
+            echo "# Optional admin Bearer token (grants full access when provided)."
+            echo "API_TOKEN=changeme"
+            echo "# Bootstrap admin user (created/validated on startup if provided)."
+            echo "# API_USERNAME="
+            echo "# API_PASSWORD="
+            echo "# Force re-applying bootstrap admin credentials on startup (use with care)."
+            echo "# OVERRIDE_API_CREDS=no"
+            echo "# Fine-grained ACLs can be enabled/disabled here."
+            echo "# API_ACL_BOOTSTRAP_FILE="
+            echo
+            echo "# --- IP allowlist ---"
+            echo "# Enable and shape inbound IP allowlist for the API."
+            echo "# API_WHITELIST_ENABLED=yes"
+            echo "API_WHITELIST_IPS=127.0.0.1"
+            echo
+            echo "# --- FastAPI surface ---"
+            echo "# Customize or disable documentation endpoints. Use 'disabled' to turn off."
+            echo "# API_TITLE=BunkerWeb API"
+            echo "# API_DOCS_URL=/docs"
+            echo "# API_REDOC_URL=/redoc"
+            echo "# API_OPENAPI_URL=/openapi.json"
+            echo "# Mount the API under a subpath (useful behind reverse proxies)."
+            echo "# API_ROOT_PATH="
+            echo
+            echo "# --- TLS/SSL ---"
+            echo "# Enable TLS for the API listener (requires cert and key)."
+            echo "# API_SSL_ENABLED=no"
+            echo "# Path to PEM-encoded certificate and private key."
+            echo "# API_SSL_CERTFILE=/etc/ssl/certs/bunkerweb-api.crt"
+            echo "# API_SSL_KEYFILE=/etc/ssl/private/bunkerweb-api.key"
+            echo "# Optional chain/CA bundle and cipher suite."
+            echo "# API_SSL_CA_CERTS="
+            echo "# API_SSL_CIPHERS_CUSTOM="
+            echo "# API_SSL_CIPHERS_LEVEL=modern # choices: modern|intermediate"
+            echo
+            echo "# --- Biscuit keys & policy ---"
+            echo "# Bind token to client IP (except private ranges)."
+            echo "# CHECK_PRIVATE_IP=yes"
+            echo "# Biscuit token lifetime in seconds (0 disables expiry)."
+            echo "# API_BISCUIT_TTL_SECONDS=3600"
+            echo "# Provide Biscuit keys via env (hex) instead of files."
+            echo "# BISCUIT_PUBLIC_KEY="
+            echo "# BISCUIT_PRIVATE_KEY="
+            echo
+            echo "# --- Rate limiting ---"
+            echo "# Enable/disable and shape rate limiting."
+            echo "# API_RATE_LIMIT_ENABLED=yes"
+            echo "# API_RATE_LIMIT_HEADERS_ENABLED=yes"
+            echo "# Global default limit (times per seconds)."
+            echo "# API_RATE_LIMIT_TIMES=100"
+            echo "# API_RATE_LIMIT_SECONDS=60"
+            echo "# Authentication endpoint limit."
+            echo "# API_RATE_LIMIT_AUTH_TIMES=10"
+            echo "# API_RATE_LIMIT_AUTH_SECONDS=60"
+            echo "# Advanced limits and rules (CSV/JSON/YAML)."
+            echo "# API_RATE_LIMIT_DEFAULTS=\"200/minute\""
+            echo "# API_RATE_LIMIT_APPLICATION_LIMITS="
+            echo "# API_RATE_LIMIT_RULES="
+            echo "# Strategy: fixed-window | moving-window | sliding-window-counter"
+            echo "# API_RATE_LIMIT_STRATEGY=fixed-window"
+            echo "# Key selector: ip | user | path | method | header:<Name>"
+            echo "# API_RATE_LIMIT_KEY=ip"
+            echo "# Exempt IPs (space or comma-separated CIDRs)."
+            echo "# API_RATE_LIMIT_EXEMPT_IPS="
+            echo "# Storage options in JSON (merged with Redis settings if USE_REDIS=yes)."
+            echo "# API_RATE_LIMIT_STORAGE_OPTIONS="
+            echo
+            echo "# --- Redis (optional, for rate limiting storage) ---"
+            echo "# USE_REDIS=no"
+            echo "# REDIS_HOST="
+            echo "# REDIS_PORT=6379"
+            echo "# REDIS_DATABASE=0"
+            echo "# REDIS_USERNAME="
+            echo "# REDIS_PASSWORD="
+            echo "# REDIS_SSL=no"
+            echo "# REDIS_SSL_VERIFY=yes"
+            echo "# REDIS_TIMEOUT=1000"
+            echo "# REDIS_KEEPALIVE_POOL=10"
+            echo "# REDIS_SENTINEL_HOSTS=sentinel1:26379 sentinel2:26379"
+            echo "# REDIS_SENTINEL_MASTER=mymaster"
+            echo "# REDIS_SENTINEL_USERNAME="
+            echo "# REDIS_SENTINEL_PASSWORD="
+        } > /etc/bunkerweb/api.env
+        chown root:nginx /etc/bunkerweb/api.env
+        chmod 660 /etc/bunkerweb/api.env
+    fi
+
+    if [ ! -f /etc/bunkerweb/api.yml ]; then
+        touch /etc/bunkerweb/api.yml
+    fi
+
+    # Extract environment variables with fallback
+    LISTEN_ADDR=$(get_env_var "API_LISTEN_ADDR" "")
+    if [ -z "$LISTEN_ADDR" ]; then
+        LISTEN_ADDR=$(get_env_var "LISTEN_ADDR" "127.0.0.1")
+    fi
+    export LISTEN_ADDR
+
+    LISTEN_PORT=$(get_env_var "API_LISTEN_PORT" "")
+    if [ -z "$LISTEN_PORT" ]; then
+        LISTEN_PORT=$(get_env_var "LISTEN_PORT" "8888")
+    fi
+    export LISTEN_PORT
|
||||
|
||||
FORWARDED_ALLOW_IPS=$(get_env_var "API_FORWARDED_ALLOW_IPS" "")
|
||||
if [ -z "$FORWARDED_ALLOW_IPS" ]; then
|
||||
FORWARDED_ALLOW_IPS=$(get_env_var "FORWARDED_ALLOW_IPS" "127.0.0.1")
|
||||
fi
|
||||
export FORWARDED_ALLOW_IPS
|
||||
|
||||
API_WHITELIST_IPS=$(get_env_var "API_WHITELIST_IPS" "")
|
||||
if [ -z "$API_WHITELIST_IPS" ]; then
|
||||
API_WHITELIST_IPS=$(get_env_var "WHITELIST_IPS" "127.0.0.1")
|
||||
fi
|
||||
export API_WHITELIST_IPS
|
||||
|
||||
export CAPTURE_OUTPUT="yes"
|
||||
|
||||
# Export variables from variables.env
|
||||
if [ -f /etc/bunkerweb/variables.env ]; then
|
||||
while IFS='=' read -r key value; do
|
||||
[[ -z "$key" || "$key" =~ ^# ]] && continue
|
||||
key=$(echo "$key" | xargs)
|
||||
export "$key=$value"
|
||||
done < /etc/bunkerweb/variables.env
|
||||
fi
|
||||
|
||||
# Export variables from api.env
|
||||
if [ -f /etc/bunkerweb/api.env ]; then
|
||||
while IFS='=' read -r key value; do
|
||||
[[ -z "$key" || "$key" =~ ^# ]] && continue
|
||||
key=$(echo "$key" | xargs)
|
||||
export "$key=$value"
|
||||
done < /etc/bunkerweb/api.env
|
||||
fi
|
||||
|
||||
mkdir -p /var/run/bunkerweb
|
||||
rm -f "$API_PID_FILE"
|
||||
sudo -E -u nginx -g nginx /bin/bash -c \
|
||||
"PYTHONPATH=$PYTHONPATH python3 -m gunicorn --logger-class utils.logger.APILogger --config utils/gunicorn.conf.py" &
|
||||
pid=$!
|
||||
echo "$pid" > "$API_PID_FILE"
|
||||
|
||||
# Wait on the process to keep unit active
|
||||
wait "$pid"
|
||||
}
|
||||
|
||||
stop() {
|
||||
echo "Stopping API service..."
|
||||
if [ -f "$API_PID_FILE" ]; then
|
||||
pid=$(cat "$API_PID_FILE")
|
||||
if kill -0 "$pid" 2>/dev/null; then
|
||||
kill -s TERM "$pid"
|
||||
fi
|
||||
# Wait up to 10s for graceful stop
|
||||
for _ in $(seq 1 10); do
|
||||
if kill -0 "$pid" 2>/dev/null; then
|
||||
sleep 1
|
||||
else
|
||||
break
|
||||
fi
|
||||
done
|
||||
rm -f "$API_PID_FILE"
|
||||
else
|
||||
echo "API service is not running or the pid file doesn't exist."
|
||||
fi
|
||||
}
|
||||
|
||||
reload() {
|
||||
echo "Reloading API service..."
|
||||
stop
|
||||
start
|
||||
}
|
||||
|
||||
case "$1" in
|
||||
start)
|
||||
start
|
||||
;;
|
||||
stop)
|
||||
stop
|
||||
;;
|
||||
reload)
|
||||
reload
|
||||
;;
|
||||
*)
|
||||
echo "Usage: $0 {start|stop|reload}"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
|
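The `API_*`-first lookup pattern used throughout the init script above can be exercised standalone. The `get_env_var` helper below is a hypothetical stand-in (the real helper is defined elsewhere in the script); the fallback chain matches the one shown: prefer the `API_`-prefixed variable, then the generic name, then a hard-coded default.

```shell
#!/bin/bash
# Hypothetical stand-in for the script's get_env_var helper:
# echo the named variable's value, or the default when unset/empty.
get_env_var() {
    name="$1"
    default="$2"
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
        echo "$default"
    else
        echo "$val"
    fi
}

# API_* settings win; otherwise fall back to the generic name, then a default.
API_LISTEN_PORT="" LISTEN_PORT=9000
PORT=$(get_env_var "API_LISTEN_PORT" "")
if [ -z "$PORT" ]; then
    PORT=$(get_env_var "LISTEN_PORT" "8888")
fi
echo "$PORT"   # prints 9000: API_LISTEN_PORT is empty, LISTEN_PORT is set
```

With both variables unset, the same chain would yield the default `8888`.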
@@ -42,9 +42,9 @@ start() {
echo "ADMIN_PASSWORD="
echo "# FLASK_SECRET=changeme"
echo "# TOTP_ENCRYPTION_KEYS=changeme"
echo "LISTEN_ADDR=127.0.0.1"
echo "# LISTEN_PORT=7000"
echo "FORWARDED_ALLOW_IPS=127.0.0.1"
echo "UI_LISTEN_ADDR=127.0.0.1"
echo "# UI_LISTEN_PORT=7000"
echo "UI_FORWARDED_ALLOW_IPS=127.0.0.1"
echo "# ENABLE_HEALTHCHECK=no"
} > /etc/bunkerweb/ui.env
chown root:nginx /etc/bunkerweb/ui.env
@@ -52,10 +52,16 @@ start() {
fi

# Extract environment variables with fallback
LISTEN_ADDR=$(get_env_var "LISTEN_ADDR" "127.0.0.1")
LISTEN_ADDR=$(get_env_var "UI_LISTEN_ADDR" "")
if [ -z "$LISTEN_ADDR" ]; then
LISTEN_ADDR=$(get_env_var "LISTEN_ADDR" "127.0.0.1")
fi
export LISTEN_ADDR

FORWARDED_ALLOW_IPS=$(get_env_var "FORWARDED_ALLOW_IPS" "127.0.0.1")
FORWARDED_ALLOW_IPS=$(get_env_var "UI_FORWARDED_ALLOW_IPS" "")
if [ -z "$FORWARDED_ALLOW_IPS" ]; then
FORWARDED_ALLOW_IPS=$(get_env_var "FORWARDED_ALLOW_IPS" "127.0.0.1")
fi
export FORWARDED_ALLOW_IPS

export CAPTURE_OUTPUT="yes"
@@ -42,8 +42,8 @@ chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ /var/run/bunkerweb/
# Ensure temp dir enforces group inheritance and no access for others
chmod 2770 /var/tmp/bunkerweb/
chmod 550 -R /usr/share/bunkerweb/
find . \( -path './scheduler' -o -path './ui' -o -path './cli' -o -path './lua' -o -path './core' -o -path './db' -o -path './gen' -o -path './helpers' -o -path './scripts' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440
find scheduler/ ui/ cli/ lua/ core/ db/ gen/ helpers/ scripts/ deps/ -type f ! -path 'deps/python/bin/*' ! -name '*.lua' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440
find . \( -path './api' -o -path './scheduler' -o -path './ui' -o -path './cli' -o -path './lua' -o -path './core' -o -path './db' -o -path './gen' -o -path './utils' -o -path './helpers' -o -path './scripts' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440
find api/ scheduler/ ui/ cli/ lua/ core/ db/ gen/ utils/ helpers/ scripts/ deps/ -type f ! -path 'deps/python/bin/*' ! -name '*.lua' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440
chmod 770 -R db/alembic/

# Function to migrate files from old locations to new ones
@@ -220,6 +220,33 @@ else
echo "ℹ️ BunkerWeb UI service is not enabled in the current configuration."
fi

# Manage the BunkerWeb API service
echo "Configuring BunkerWeb API service..."

# Enable API only when explicitly requested (via env or flag)
if [ "${SERVICE_API:-no}" = "yes" ] || [ -f /var/tmp/bunkerweb_enable_api ]; then
# Fresh installation or explicit API enablement
if [ ! -f /var/tmp/bunkerweb_upgrade ]; then
echo "🚀 Enabling and starting the BunkerWeb API service..."
do_and_check_cmd systemctl enable --now bunkerweb-api
# Clean up enable flag if present
if [ -f /var/tmp/bunkerweb_enable_api ]; then
rm -f /var/tmp/bunkerweb_enable_api
fi
else
# Restart API only if already running
if systemctl is-active --quiet bunkerweb-api; then
echo "📋 Restarting the BunkerWeb API service after upgrade..."
do_and_check_cmd systemctl restart bunkerweb-api
fi
fi
elif systemctl is-active --quiet bunkerweb-api; then
echo "🛑 Disabling and stopping the BunkerWeb API service..."
do_and_check_cmd systemctl disable --now bunkerweb-api
else
echo "ℹ️ BunkerWeb API service is not enabled in the current configuration."
fi

# Fetch CrowdSec config from /var/tmp/crowdsec.env and merge into variables.env if present
if [ -f /var/tmp/crowdsec.env ] && [ -f /etc/bunkerweb/variables.env ]; then
echo "Adding CrowdSec configuration from the easy-install script to /etc/bunkerweb/variables.env..."
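The postinstall branching above reduces to a decision over three inputs: whether the API was requested (`SERVICE_API=yes` or the enable-flag file), whether this is an upgrade, and whether the unit is currently active. As a sketch, that decision can be isolated in a pure function (`decide_api_action` is a hypothetical name, not part of the script):

```shell
#!/bin/bash
# Map (requested, upgrade, active) to the systemctl action the
# postinstall script above would take on bunkerweb-api.
decide_api_action() {
    requested="$1" upgrade="$2" active="$3"
    if [ "$requested" = "yes" ]; then
        if [ "$upgrade" = "no" ]; then
            echo "enable --now"        # fresh install or explicit enable
        elif [ "$active" = "yes" ]; then
            echo "restart"             # upgrade with a running API
        else
            echo "none"                # upgrade, API not running
        fi
    elif [ "$active" = "yes" ]; then
        echo "disable --now"           # no longer requested but still active
    else
        echo "none"
    fi
}

decide_api_action yes no no    # prints: enable --now
decide_api_action yes yes yes  # prints: restart
decide_api_action no yes yes   # prints: disable --now
```

Note the asymmetry: enabling requires an explicit request, while disabling only happens when the unit is actually running.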
@@ -65,8 +65,8 @@ RUN cp helpers/bwcli /usr/bin/ && \
chown -R root:scheduler INTEGRATION /data /etc/nginx /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb /usr/bin/bwcli && \
chmod -R 770 /data /etc/nginx /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
chmod 2770 /var/tmp/bunkerweb && \
find . \( -path './scheduler' -o -path './cli' -o -path './core' -o -path './db' -o -path './gen' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
find scheduler cli core db gen helpers deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
find . \( -path './scheduler' -o -path './cli' -o -path './core' -o -path './db' -o -path './gen' -o -path './utils' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
find scheduler cli core db gen utils helpers deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
chmod 770 -R db/alembic && \
chmod 660 INTEGRATION
@@ -28,7 +28,7 @@ certbot-dns-scaleway==0.0.7
cryptography==45.0.7
importlib-metadata==8.7.0
maxminddb==2.8.2
pydantic==2.11.7
pydantic==2.11.9
python-magic==0.4.27
requests==2.32.5
schedule==1.2.2
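The requirements hunks that follow pin every distribution with `--hash=sha256:` entries, so pip's hash-checking mode can reject any artifact whose digest differs. As a sketch of where those values come from, a pin can be recomputed for any local artifact with `sha256sum` (the file here is a throwaway placeholder, not a real package):

```shell
#!/bin/bash
# Recompute a pip-style --hash pin for a downloaded artifact.
tmp=$(mktemp)
printf 'example artifact' > "$tmp"          # placeholder for a wheel/sdist
digest=$(sha256sum "$tmp" | awk '{print $1}')
pin="--hash=sha256:${digest}"
echo "$pin"                                  # e.g. --hash=sha256:<64 hex chars>
rm -f "$tmp"
```

Installing with `pip install --require-hashes -r requirements.txt` then fails closed if a downloaded file does not match its pinned digest.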
@@ -39,13 +39,13 @@ beautifulsoup4==4.13.5 \
--hash=sha256:5e70131382930e7c3de33450a2f54a63d5e4b19386eab43a5b34d594268f3695 \
--hash=sha256:642085eaa22233aceadff9c69651bc51e8bf3f874fb6d7104ece2beb24b47c4a
# via dns-lexicon
boto3==1.40.24 \
--hash=sha256:24a19e275d33e918afc22a78c6a1e20c14d02cc00e2f786b05e2a4a32191457e \
--hash=sha256:cc147ad13e8edf7ec69cbb4df8fe60f187f8b2c9ab8befa0fd1fbcfa4fc80b1f
boto3==1.40.30 \
--hash=sha256:04e89abf61240857bf7dec160e22f097eec68c502509b2bb3c5010a22cb91052 \
--hash=sha256:e95db539c938710917f4cb4fc5915f71b27f2c836d949a1a95df7895d2e9ec8b
# via certbot-dns-route53
botocore==1.40.24 \
--hash=sha256:af2b49e52950a12229440d7c297aaad0a7b75fd1c4f8700b164948b207a08cf0 \
--hash=sha256:d566840f2291bb5df1c0903ad385c61c865927d562d41dcf6468c9cee4cc313a
botocore==1.40.30 \
--hash=sha256:1d87874ad81234bec3e83f9de13618f67ccdfefd08d6b8babc041cd45007447e \
--hash=sha256:8a74f77cfe5c519826d22f7613f89544cbb8491a1a49d965031bd997f89a8e3f
# via
# boto3
# s3transfer
@@ -166,74 +166,91 @@ certifi==2025.8.3 \
--hash=sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407 \
--hash=sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5
# via requests
cffi==1.17.1 \
--hash=sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8 \
--hash=sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2 \
--hash=sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1 \
--hash=sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15 \
--hash=sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36 \
--hash=sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824 \
--hash=sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8 \
--hash=sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36 \
--hash=sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17 \
--hash=sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf \
--hash=sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc \
--hash=sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3 \
--hash=sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed \
--hash=sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702 \
--hash=sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1 \
--hash=sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8 \
--hash=sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903 \
--hash=sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6 \
--hash=sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d \
--hash=sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b \
--hash=sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e \
--hash=sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be \
--hash=sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c \
--hash=sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683 \
--hash=sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9 \
--hash=sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c \
--hash=sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8 \
--hash=sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1 \
--hash=sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4 \
--hash=sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655 \
--hash=sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67 \
--hash=sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595 \
--hash=sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0 \
--hash=sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65 \
--hash=sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41 \
--hash=sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6 \
--hash=sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401 \
--hash=sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6 \
--hash=sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3 \
--hash=sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16 \
--hash=sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93 \
--hash=sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e \
--hash=sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4 \
--hash=sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964 \
--hash=sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c \
--hash=sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576 \
--hash=sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0 \
--hash=sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3 \
--hash=sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662 \
--hash=sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3 \
--hash=sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff \
--hash=sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5 \
--hash=sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd \
--hash=sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f \
--hash=sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5 \
--hash=sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14 \
--hash=sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d \
--hash=sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9 \
--hash=sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7 \
--hash=sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382 \
--hash=sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a \
--hash=sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e \
--hash=sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a \
--hash=sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4 \
--hash=sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99 \
--hash=sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87 \
--hash=sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b
cffi==2.0.0 \
--hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
--hash=sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b \
--hash=sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f \
--hash=sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9 \
--hash=sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44 \
--hash=sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2 \
--hash=sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c \
--hash=sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75 \
--hash=sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65 \
--hash=sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e \
--hash=sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a \
--hash=sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e \
--hash=sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25 \
--hash=sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a \
--hash=sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe \
--hash=sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b \
--hash=sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91 \
--hash=sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592 \
--hash=sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187 \
--hash=sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c \
--hash=sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1 \
--hash=sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94 \
--hash=sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba \
--hash=sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb \
--hash=sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165 \
--hash=sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529 \
--hash=sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca \
--hash=sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c \
--hash=sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6 \
--hash=sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c \
--hash=sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0 \
--hash=sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743 \
--hash=sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63 \
--hash=sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5 \
--hash=sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5 \
--hash=sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4 \
--hash=sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d \
--hash=sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b \
--hash=sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93 \
--hash=sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205 \
--hash=sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27 \
--hash=sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512 \
--hash=sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d \
--hash=sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c \
--hash=sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037 \
--hash=sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26 \
--hash=sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322 \
--hash=sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb \
--hash=sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c \
--hash=sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8 \
--hash=sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4 \
--hash=sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414 \
--hash=sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9 \
--hash=sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664 \
--hash=sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9 \
--hash=sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775 \
--hash=sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739 \
--hash=sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc \
--hash=sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062 \
--hash=sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe \
--hash=sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9 \
--hash=sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92 \
--hash=sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5 \
--hash=sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13 \
--hash=sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d \
--hash=sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26 \
--hash=sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f \
--hash=sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495 \
--hash=sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b \
--hash=sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6 \
--hash=sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c \
--hash=sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef \
--hash=sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5 \
--hash=sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18 \
--hash=sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad \
--hash=sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3 \
--hash=sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7 \
--hash=sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5 \
--hash=sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534 \
--hash=sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49 \
--hash=sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2 \
--hash=sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5 \
--hash=sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453 \
--hash=sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf
# via cryptography
charset-normalizer==3.4.3 \
--hash=sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91 \
@@ -390,9 +407,9 @@ dns-lexicon==3.21.1 \
# certbot-dns-nsone
# certbot-dns-ovh
# certbot-dns-sakuracloud
dnspython==2.7.0 \
--hash=sha256:b4c34b7d10b51bcc3a5071e7b8dee77939f1e878477eeecc965e9835f63c6c86 \
--hash=sha256:ce9c432eda0dc91cf618a5cedf1a4e142651196bbcd2c80e89ed5a907e5cfaf1
dnspython==2.8.0 \
--hash=sha256:01d9bbc4a2d76bf0db7c1f729812ded6d912bd318d3b1cf81d30c0f845dbf3af \
--hash=sha256:181d3c6996452cb1189c4046c61599b84a5a86e099562ffde77d26984ff26d0f
# via
# certbot-dns-desec
# certbot-dns-dynu
@@ -426,9 +443,9 @@ googleapis-common-protos==1.70.0 \
--hash=sha256:0e1b44e0ea153e6594f9f394fef15193a68aaaea2d843f83e2742717ca753257 \
--hash=sha256:b8bfcca8c25a2bb253e0e0b0adaf8c00773e5e6af6fd92397576680b807e0fd8
# via google-api-core
httplib2==0.30.0 \
--hash=sha256:d10443a2bdfe0ea5dbb17e016726146d48b574208dafd41e854cf34e7d78842c \
--hash=sha256:d5b23c11fcf8e57e00ff91b7008656af0f6242c8886fd97065c97509e4e548c5
httplib2==0.31.0 \
--hash=sha256:ac7ab497c50975147d4f7b1ade44becc7df2f8954d42b38b3d69c515f531135c \
--hash=sha256:b9cd78abea9b4e43a7714c6e0f8b6b8561a6fc1e95d5dbd367f5bf0ef35f5d24
# via
# google-api-python-client
# google-auth-httplib2
@@ -571,16 +588,16 @@ proto-plus==1.26.1 \
--hash=sha256:13285478c2dcf2abb829db158e1047e2f1e8d63a077d94263c2b88b043c75a66 \
--hash=sha256:21a515a4c4c0088a773899e23c7bbade3d18f9c66c73edd4c7ee3816bc96a012
# via google-api-core
protobuf==6.32.0 \
--hash=sha256:15eba1b86f193a407607112ceb9ea0ba9569aed24f93333fe9a497cf2fda37d3 \
--hash=sha256:501fe6372fd1c8ea2a30b4d9be8f87955a64d6be9c88a973996cef5ef6f0abf1 \
--hash=sha256:75a2aab2bd1aeb1f5dc7c5f33bcb11d82ea8c055c9becbb41c26a8c43fd7092c \
--hash=sha256:7db8ed09024f115ac877a1427557b838705359f047b2ff2f2b2364892d19dacb \
--hash=sha256:84f9e3c1ff6fb0308dbacb0950d8aa90694b0d0ee68e75719cb044b7078fe741 \
--hash=sha256:a81439049127067fc49ec1d36e25c6ee1d1a2b7be930675f919258d03c04e7d2 \
--hash=sha256:a8bdbb2f009cfc22a36d031f22a625a38b615b5e19e558a7b756b3279723e68e \
--hash=sha256:ba377e5b67b908c8f3072a57b63e2c6a4cbd18aea4ed98d2584350dbf46f2783 \
--hash=sha256:d52691e5bee6c860fff9a1c86ad26a13afbeb4b168cd4445c922b7e2cf85aaf0
protobuf==6.32.1 \
--hash=sha256:2601b779fc7d32a866c6b4404f9d42a3f67c5b9f3f15b4db3cccabe06b95c346 \
--hash=sha256:2f5b80a49e1eb7b86d85fcd23fe92df154b9730a725c3b38c4e43b9d77018bf4 \
--hash=sha256:68ff170bac18c8178f130d1ccb94700cf72852298e016a2443bdb9502279e5f1 \
--hash=sha256:a8a32a84bc9f2aad712041b8b366190f71dde248926da517bde9e832e4412085 \
--hash=sha256:b00a7d8c25fa471f16bc8153d0e53d6c9e827f0953f3c09aaa4331c718cae5e1 \
--hash=sha256:b1864818300c297265c83a4982fd3169f97122c299f56a56e2445c3698d34710 \
--hash=sha256:d0975d0b2f3e6957111aa3935d08a0eb7e006b1505d825f862a1fffc8348e122 \
--hash=sha256:d8c7e6eb619ffdf105ee4ab76af5a68b60a9d0f66da3ea12d1640e6d8dab7281 \
--hash=sha256:ee2469e4a021474ab9baafea6cd070e5bf27c7d29433504ddea1a4ee5850f68d
# via
# google-api-core
# googleapis-common-protos
@@ -595,13 +612,13 @@ pyasn1-modules==0.4.2 \
--hash=sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a \
--hash=sha256:677091de870a80aae844b1ca6134f54652fa2c8c5a52aa396440ac3106e941e6
# via google-auth
pycparser==2.22 \
--hash=sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6 \
--hash=sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc
pycparser==2.23 \
--hash=sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2 \
--hash=sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934
# via cffi
pydantic==2.11.7 \
--hash=sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db \
--hash=sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b
pydantic==2.11.9 \
--hash=sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2 \
--hash=sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2
# via -r requirements.in
pydantic-core==2.33.2 \
--hash=sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d \
@@ -704,17 +721,17 @@ pydantic-core==2.33.2 \
--hash=sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6 \
--hash=sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d
# via pydantic
pyopenssl==25.1.0 \
--hash=sha256:2b11f239acc47ac2e5aca04fd7fa829800aeee22a2eb30d744572a157bd8a1ab \
--hash=sha256:8d031884482e0c67ee92bf9a4d8cceb08d92aba7136432ffb0703c5280fc205b
pyopenssl==25.2.0 \
--hash=sha256:71db4c6a4badb88b9804ade80886dc19b7eec3678a4a23c9029f97024c65191a \
--hash=sha256:8a3d87605158baffa5d35d097539b2221908f7b15c3ee03c99cdd49e79a7d5ae
# via acme
pyotp==2.9.0 \
--hash=sha256:346b6642e0dbdde3b4ff5a930b664ca82abfa116356ed48cc42c7d6590d36f63 \
--hash=sha256:81c2e5865b8ac55e825b0358e496e1d9387c811e85bb40e71a3b29b288963612
# via dns-lexicon
pyparsing==3.2.3 \
--hash=sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf \
--hash=sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be
pyparsing==3.2.4 \
--hash=sha256:91d0fcde680d42cd031daf3a6ba20da3107e08a75de50da58360e7d94ab24d36 \
--hash=sha256:fff89494f45559d0f2ce46613b419f632bbb6afbdaed49696d322bcf98a58e99
# via httplib2
pyrfc3339==2.1.0 \
--hash=sha256:560f3f972e339f579513fe1396974352fd575ef27caff160a38b312252fcddf3 \
@@ -826,9 +843,9 @@ rsa==4.9.1 \
--hash=sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762 \
--hash=sha256:e7bdbfdb5497da4c07dfd35530e1a902659db6ff241e39d9953cad06ebd0ae75
# via google-auth
s3transfer==0.13.1 \
--hash=sha256:a981aa7429be23fe6dfc13e80e4020057cbab622b08c0315288758d67cabc724 \
--hash=sha256:c3fdba22ba1bd367922f27ec8032d6a1cf5f10c934fb5d68cf60fd5a23d936cf
s3transfer==0.14.0 \
--hash=sha256:ea3b790c7077558ed1f02a3072fb3cb992bbbd253392f4b6e9e8976941c7d456 \
--hash=sha256:eff12264e7c8b4985074ccce27a3b38a485bb7f7422cc8046fee9be4983e4125
# via boto3
schedule==1.2.2 \
--hash=sha256:15fe9c75fe5fd9b9627f3f19cc0ef1420508f9f9a46f45cd0769ef75ede5f0b7 \
@@ -879,44 +896,38 @@ zipp==3.23.0 \
     --hash=sha256:071652d6115ed432f5ce1d34c336c0adfd6a884660d1e9712a256d3d3bd4b14e \
     --hash=sha256:a07157588a12518c9d4034df3fbbee09c814741a33ff63c05fa29d26a2404166
     # via importlib-metadata
-zope-interface==7.2 \
-    --hash=sha256:033b3923b63474800b04cba480b70f6e6243a62208071fc148354f3f89cc01b7 \
-    --hash=sha256:05b910a5afe03256b58ab2ba6288960a2892dfeef01336dc4be6f1b9ed02ab0a \
-    --hash=sha256:086ee2f51eaef1e4a52bd7d3111a0404081dadae87f84c0ad4ce2649d4f708b7 \
-    --hash=sha256:0ef9e2f865721553c6f22a9ff97da0f0216c074bd02b25cf0d3af60ea4d6931d \
-    --hash=sha256:1090c60116b3da3bfdd0c03406e2f14a1ff53e5771aebe33fec1edc0a350175d \
-    --hash=sha256:144964649eba4c5e4410bb0ee290d338e78f179cdbfd15813de1a664e7649b3b \
-    --hash=sha256:15398c000c094b8855d7d74f4fdc9e73aa02d4d0d5c775acdef98cdb1119768d \
-    --hash=sha256:1909f52a00c8c3dcab6c4fad5d13de2285a4b3c7be063b239b8dc15ddfb73bd2 \
-    --hash=sha256:21328fcc9d5b80768bf051faa35ab98fb979080c18e6f84ab3f27ce703bce465 \
-    --hash=sha256:224b7b0314f919e751f2bca17d15aad00ddbb1eadf1cb0190fa8175edb7ede62 \
-    --hash=sha256:25e6a61dcb184453bb00eafa733169ab6d903e46f5c2ace4ad275386f9ab327a \
-    --hash=sha256:27f926f0dcb058211a3bb3e0e501c69759613b17a553788b2caeb991bed3b61d \
-    --hash=sha256:29caad142a2355ce7cfea48725aa8bcf0067e2b5cc63fcf5cd9f97ad12d6afb5 \
-    --hash=sha256:2ad9913fd858274db8dd867012ebe544ef18d218f6f7d1e3c3e6d98000f14b75 \
-    --hash=sha256:31d06db13a30303c08d61d5fb32154be51dfcbdb8438d2374ae27b4e069aac40 \
-    --hash=sha256:3e0350b51e88658d5ad126c6a57502b19d5f559f6cb0a628e3dc90442b53dd98 \
-    --hash=sha256:3f6771d1647b1fc543d37640b45c06b34832a943c80d1db214a37c31161a93f1 \
-    --hash=sha256:4893395d5dd2ba655c38ceb13014fd65667740f09fa5bb01caa1e6284e48c0cd \
-    --hash=sha256:52e446f9955195440e787596dccd1411f543743c359eeb26e9b2c02b077b0519 \
-    --hash=sha256:550f1c6588ecc368c9ce13c44a49b8d6b6f3ca7588873c679bd8fd88a1b557b6 \
-    --hash=sha256:72cd1790b48c16db85d51fbbd12d20949d7339ad84fd971427cf00d990c1f137 \
-    --hash=sha256:7bd449c306ba006c65799ea7912adbbfed071089461a19091a228998b82b1fdb \
-    --hash=sha256:7dc5016e0133c1a1ec212fc87a4f7e7e562054549a99c73c8896fa3a9e80cbc7 \
-    --hash=sha256:802176a9f99bd8cc276dcd3b8512808716492f6f557c11196d42e26c01a69a4c \
-    --hash=sha256:80ecf2451596f19fd607bb09953f426588fc1e79e93f5968ecf3367550396b22 \
-    --hash=sha256:8b49f1a3d1ee4cdaf5b32d2e738362c7f5e40ac8b46dd7d1a65e82a4872728fe \
-    --hash=sha256:8e7da17f53e25d1a3bde5da4601e026adc9e8071f9f6f936d0fe3fe84ace6d54 \
-    --hash=sha256:a102424e28c6b47c67923a1f337ede4a4c2bba3965b01cf707978a801fc7442c \
-    --hash=sha256:a19a6cc9c6ce4b1e7e3d319a473cf0ee989cbbe2b39201d7c19e214d2dfb80c7 \
-    --hash=sha256:a71a5b541078d0ebe373a81a3b7e71432c61d12e660f1d67896ca62d9628045b \
-    --hash=sha256:baf95683cde5bc7d0e12d8e7588a3eb754d7c4fa714548adcd96bdf90169f021 \
-    --hash=sha256:cab15ff4832580aa440dc9790b8a6128abd0b88b7ee4dd56abacbc52f212209d \
-    --hash=sha256:ce290e62229964715f1011c3dbeab7a4a1e4971fd6f31324c4519464473ef9f2 \
-    --hash=sha256:d3a8ffec2a50d8ec470143ea3d15c0c52d73df882eef92de7537e8ce13475e8a \
-    --hash=sha256:e204937f67b28d2dca73ca936d3039a144a081fc47a07598d44854ea2a106239 \
-    --hash=sha256:eb23f58a446a7f09db85eda09521a498e109f137b85fb278edb2e34841055398 \
-    --hash=sha256:f6dd02ec01f4468da0f234da9d9c8545c5412fef80bc590cc51d8dd084138a89
+zope-interface==8.0 \
+    --hash=sha256:07405019f635a93b318807cb2ec7b05a5ef30f67cf913d11eb2f156ddbcead0d \
+    --hash=sha256:0caca2915522451e92c96c2aec404d2687e9c5cb856766940319b3973f62abb8 \
+    --hash=sha256:160ba50022b342451baf516de3e3a2cd2d8c8dbac216803889a5eefa67083688 \
+    --hash=sha256:1858d1e5bb2c5ae766890708184a603eb484bb7454e306e967932a9f3c558b07 \
+    --hash=sha256:1bee9c1b42513148f98d3918affd829804a5c992c000c290dc805f25a75a6a3f \
+    --hash=sha256:450ab3357799eed6093f3a9f1fa22761b3a9de9ebaf57f416da2c9fb7122cdcb \
+    --hash=sha256:453d2c6668778b8d2215430ed61e04417386e51afb23637ef2e14972b047b700 \
+    --hash=sha256:4d639d5015c1753031e180b8ef81e72bb7d47b0aca0218694ad1f19b0a6c6b63 \
+    --hash=sha256:5cffe23eb610e32a83283dde5413ab7a17938fa3fbd023ca3e529d724219deb0 \
+    --hash=sha256:67047a4470cb2fddb5ba5105b0160a1d1c30ce4b300cf264d0563136adac4eac \
+    --hash=sha256:778458ea69413cf8131a3fcc6f0ea2792d07df605422fb03ad87daca3f8f78ce \
+    --hash=sha256:7e88c66ebedd1e839082f308b8372a50ef19423e01ee2e09600b80e765a10234 \
+    --hash=sha256:7fb931bf55c66a092c5fbfb82a0ff3cc3221149b185bde36f0afc48acb8dcd92 \
+    --hash=sha256:804ebacb2776eb89a57d9b5e9abec86930e0ee784a0005030801ae2f6c04d5d8 \
+    --hash=sha256:879bb5bf937cde4acd738264e87f03c7bf7d45478f7c8b9dc417182b13d81f6c \
+    --hash=sha256:a26ae2fe77c58b4df8c39c2b7c3aadedfd44225a1b54a1d74837cd27057b2fc8 \
+    --hash=sha256:a2c107cc6dff954be25399cd81ddc390667f79af306802fc0c1de98614348b70 \
+    --hash=sha256:a9a8a71c38628af82a9ea1f7be58e5d19360a38067080c8896f6cbabe167e4f8 \
+    --hash=sha256:b14d5aac547e635af749ce20bf49a3f5f93b8a854d2a6b1e95d4d5e5dc618f7d \
+    --hash=sha256:b207966f39c2e6fcfe9b68333acb7b19afd3fdda29eccc4643f8d52c180a3185 \
+    --hash=sha256:b80447a3a5c7347f4ebf3e50de319c8d2a5dabd7de32f20899ac50fc275b145d \
+    --hash=sha256:c0cc51ebd984945362fd3abdc1e140dbd837c3e3b680942b3fa24fe3aac26ef8 \
+    --hash=sha256:c23af5b4c4e332253d721ec1222c809ad27ceae382ad5b8ff22c4c4fb6eb8ed5 \
+    --hash=sha256:c4d9d3982aaa88b177812cd911ceaf5ffee4829e86ab3273c89428f2c0c32cc4 \
+    --hash=sha256:daf4d6ba488a0fb560980b575244aa962a75e77b7c86984138b8d52bd4b5465f \
+    --hash=sha256:dee2d1db1067e8a4b682dde7eb4bff21775412358e142f4f98c9066173f9dacd \
+    --hash=sha256:e38bb30a58887d63b80b01115ab5e8be6158b44d00b67197186385ec7efe44c7 \
+    --hash=sha256:e3cf57f90a760c56c55668f650ba20c3444cde8332820db621c9a1aafc217471 \
+    --hash=sha256:ea1f2e47bc0124a03ee1e5fb31aee5dfde876244bcc552b9e3eb20b041b350d7 \
+    --hash=sha256:ec1da7b9156ae000cea2d19bad83ddb5c50252f9d7b186da276d17768c67a3cb \
+    --hash=sha256:ee9ecad04269c2da4b1be403a47993981531ffd557064b870eab4094730e5062
     # via
     # certbot-dns-bunny
     # certbot-dns-desec

@@ -92,8 +92,8 @@ RUN echo "Docker" > INTEGRATION && \
 for dir in $(echo "pro/plugins configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs") ; do mkdir "/data/${dir}" ; done && \
 chown -R root:ui INTEGRATION /data /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
 chmod -R 770 /data /var/cache/bunkerweb /var/lib/bunkerweb /var/www/html /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/bunkerweb && \
-find . \( -path './ui' -o -path './db' -o -path './gen' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
-find ui db gen helpers deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+find . \( -path './ui' -o -path './db' -o -path './gen' -o -path './utils' -o -path './helpers' -o -path './deps' \) -prune -o -type f -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
+find ui db gen utils helpers deps -type f ! -path 'deps/python/bin/*' ! -name '*.py' ! -name '*.pyc' ! -name '*.sh' ! -name '*.so' -print0 | xargs -0 -P "$(nproc)" -n 1024 chmod 440 && \
 chmod 660 INTEGRATION

 LABEL maintainer="Bunkerity <contact@bunkerity.com>"

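The two `find` invocations in the Dockerfile hunk above first lock down everything outside the pruned top-level directories, then apply a second pass inside them. The prune-then-chmod pattern can be sketched in Python (directory and file names here are illustrative, not taken from the image):

```python
import os
import stat
from pathlib import Path


def lock_down(root: str, pruned: set, mode: int = 0o440) -> list:
    """chmod every regular file under `root` to `mode`, skipping the
    top-level directories listed in `pruned` (mirrors `find ... -prune`)."""
    changed = []
    for dirpath, dirnames, filenames in os.walk(root):
        if Path(dirpath) == Path(root):
            # Prune: os.walk will not descend into directories removed here.
            dirnames[:] = [d for d in dirnames if d not in pruned]
        for name in filenames:
            path = Path(dirpath) / name
            path.chmod(mode)
            changed.append(str(path.relative_to(root)))
    return sorted(changed)
```

In-place editing of `dirnames` is what makes `os.walk` skip a subtree, just as `-prune` stops `find` from descending.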
@@ -22,7 +22,7 @@ from app.utils import COLUMNS_PREFERENCES_DEFAULTS

 class UIDatabase(Database):
     def __init__(self, logger: Logger, sqlalchemy_string: Optional[str] = None, *, pool: Optional[bool] = None, log: bool = True, **kwargs) -> None:
-        super().__init__(logger, sqlalchemy_string, ui=True, pool=pool, log=log, **kwargs)
+        super().__init__(logger, sqlalchemy_string, external=True, pool=pool, log=log, **kwargs)

     def get_ui_user(self, *, username: Optional[str] = None, as_dict: bool = False) -> Optional[Union[UiUsers, dict]]:
         """Get ui user. If username is None, return the first admin user."""

@@ -1,5 +1,8 @@
 #!/bin/bash

+# Enforce a restrictive default umask for all operations
+umask 027
+
 # Load utility functions from a shared helper script.
 # shellcheck disable=SC1091
 . /usr/share/bunkerweb/helpers/utils.sh

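The `umask 027` added in the hunk above clears group write and all world bits from every file and directory the script creates. A small Python probe of the arithmetic (a sketch, not part of the repo):

```python
import os
import stat
import tempfile


def create_with_umask(mask: int, requested: int = 0o666) -> int:
    """Create a file while `mask` is active and return the permission
    bits it actually receives: requested & ~mask."""
    old = os.umask(mask)
    try:
        path = os.path.join(tempfile.mkdtemp(), "probe")
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, requested)
        os.close(fd)
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)  # restore the previous process-wide umask
```

With `umask 027`, a file requested as `0o666` lands as `0o640`: owner read/write, group read-only, no world access.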
@@ -235,74 +235,91 @@ certifi==2025.8.3 \
     --hash=sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407 \
     --hash=sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5
     # via requests
-cffi==1.17.1 \
-    --hash=sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8 \
-    --hash=sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2 \
-    --hash=sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1 \
-    --hash=sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15 \
-    --hash=sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36 \
-    --hash=sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824 \
-    --hash=sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8 \
-    --hash=sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36 \
-    --hash=sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17 \
-    --hash=sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf \
-    --hash=sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc \
-    --hash=sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3 \
-    --hash=sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed \
-    --hash=sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702 \
-    --hash=sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1 \
-    --hash=sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8 \
-    --hash=sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903 \
-    --hash=sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6 \
-    --hash=sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d \
-    --hash=sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b \
-    --hash=sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e \
-    --hash=sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be \
-    --hash=sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c \
-    --hash=sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683 \
-    --hash=sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9 \
-    --hash=sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c \
-    --hash=sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8 \
-    --hash=sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1 \
-    --hash=sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4 \
-    --hash=sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655 \
-    --hash=sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67 \
-    --hash=sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595 \
-    --hash=sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0 \
-    --hash=sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65 \
-    --hash=sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41 \
-    --hash=sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6 \
-    --hash=sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401 \
-    --hash=sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6 \
-    --hash=sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3 \
-    --hash=sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16 \
-    --hash=sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93 \
-    --hash=sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e \
-    --hash=sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4 \
-    --hash=sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964 \
-    --hash=sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c \
-    --hash=sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576 \
-    --hash=sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0 \
-    --hash=sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3 \
-    --hash=sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662 \
-    --hash=sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3 \
-    --hash=sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff \
-    --hash=sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5 \
-    --hash=sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd \
-    --hash=sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f \
-    --hash=sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5 \
-    --hash=sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14 \
-    --hash=sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d \
-    --hash=sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9 \
-    --hash=sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7 \
-    --hash=sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382 \
-    --hash=sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a \
-    --hash=sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e \
-    --hash=sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a \
-    --hash=sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4 \
-    --hash=sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99 \
-    --hash=sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87 \
-    --hash=sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b
+cffi==2.0.0 \
+    --hash=sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb \
+    --hash=sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b \
+    --hash=sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f \
+    --hash=sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9 \
+    --hash=sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44 \
+    --hash=sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2 \
+    --hash=sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c \
+    --hash=sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75 \
+    --hash=sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65 \
+    --hash=sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e \
+    --hash=sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a \
+    --hash=sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e \
+    --hash=sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25 \
+    --hash=sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a \
+    --hash=sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe \
+    --hash=sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b \
+    --hash=sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91 \
+    --hash=sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592 \
+    --hash=sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187 \
+    --hash=sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c \
+    --hash=sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1 \
+    --hash=sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94 \
+    --hash=sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba \
+    --hash=sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb \
+    --hash=sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165 \
+    --hash=sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529 \
+    --hash=sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca \
+    --hash=sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c \
+    --hash=sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6 \
+    --hash=sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c \
+    --hash=sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0 \
+    --hash=sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743 \
+    --hash=sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63 \
+    --hash=sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5 \
+    --hash=sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5 \
+    --hash=sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4 \
+    --hash=sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d \
+    --hash=sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b \
+    --hash=sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93 \
+    --hash=sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205 \
+    --hash=sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27 \
+    --hash=sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512 \
+    --hash=sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d \
+    --hash=sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c \
+    --hash=sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037 \
+    --hash=sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26 \
+    --hash=sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322 \
+    --hash=sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb \
+    --hash=sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c \
+    --hash=sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8 \
+    --hash=sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4 \
+    --hash=sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414 \
+    --hash=sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9 \
+    --hash=sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664 \
+    --hash=sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9 \
+    --hash=sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775 \
+    --hash=sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739 \
+    --hash=sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc \
+    --hash=sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062 \
+    --hash=sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe \
+    --hash=sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9 \
+    --hash=sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92 \
+    --hash=sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5 \
+    --hash=sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13 \
+    --hash=sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d \
+    --hash=sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26 \
+    --hash=sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f \
+    --hash=sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495 \
+    --hash=sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b \
+    --hash=sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6 \
+    --hash=sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c \
+    --hash=sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef \
+    --hash=sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5 \
+    --hash=sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18 \
+    --hash=sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad \
+    --hash=sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3 \
+    --hash=sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7 \
+    --hash=sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5 \
+    --hash=sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534 \
+    --hash=sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49 \
+    --hash=sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2 \
+    --hash=sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5 \
+    --hash=sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453 \
+    --hash=sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf
     # via cryptography
 charset-normalizer==3.4.3 \
     --hash=sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91 \
@@ -723,13 +740,13 @@ psutil==7.0.0 \
     --hash=sha256:a5f098451abc2828f7dc6b58d44b532b22f2088f4999a937557b603ce72b1993 \
     --hash=sha256:ba3fcef7523064a6c9da440fc4d6bd07da93ac726b5733c29027d7dc95b39d99
     # via -r requirements.in
-pycparser==2.22 \
-    --hash=sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6 \
-    --hash=sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc
+pycparser==2.23 \
+    --hash=sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2 \
+    --hash=sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934
     # via cffi
-pyopenssl==25.1.0 \
-    --hash=sha256:2b11f239acc47ac2e5aca04fd7fa829800aeee22a2eb30d744572a157bd8a1ab \
-    --hash=sha256:8d031884482e0c67ee92bf9a4d8cceb08d92aba7136432ffb0703c5280fc205b
+pyopenssl==25.2.0 \
+    --hash=sha256:71db4c6a4badb88b9804ade80886dc19b7eec3678a4a23c9029f97024c65191a \
+    --hash=sha256:8a3d87605158baffa5d35d097539b2221908f7b15c3ee03c99cdd49e79a7d5ae
     # via acme
 pyrfc3339==2.1.0 \
     --hash=sha256:560f3f972e339f579513fe1396974352fd575ef27caff160a38b312252fcddf3 \

@@ -50,9 +50,9 @@ BISCUIT_PRIVATE_KEY_HASH_FILE = BISCUIT_PRIVATE_KEY_FILE.with_suffix(".hash") #

 MAX_WORKERS = int(getenv("MAX_WORKERS", max((cpu_count() or 1) - 1, 1)))
 LOG_LEVEL = getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "info"))
-LISTEN_ADDR = getenv("LISTEN_ADDR", "0.0.0.0")
-LISTEN_PORT = getenv("LISTEN_PORT", "7000")
-FORWARDED_ALLOW_IPS = getenv("FORWARDED_ALLOW_IPS", "*")
+LISTEN_ADDR = getenv("UI_LISTEN_ADDR", getenv("LISTEN_ADDR", "0.0.0.0"))
+LISTEN_PORT = getenv("UI_LISTEN_PORT", getenv("LISTEN_PORT", "7000"))
+FORWARDED_ALLOW_IPS = getenv("UI_FORWARDED_ALLOW_IPS", getenv("FORWARDED_ALLOW_IPS", "*"))
 CAPTURE_OUTPUT = getenv("CAPTURE_OUTPUT", "no").lower() == "yes"

 wsgi_app = "main:app"

@@ -19,9 +19,9 @@ HEALTH_FILE = TMP_DIR.joinpath("tmp-ui.healthy")
 PID_FILE = RUN_DIR.joinpath("tmp-ui.pid")

 LOG_LEVEL = getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "info"))
-LISTEN_ADDR = getenv("LISTEN_ADDR", "0.0.0.0")
-LISTEN_PORT = getenv("LISTEN_PORT", "7000")
-FORWARDED_ALLOW_IPS = getenv("FORWARDED_ALLOW_IPS", "*")
+LISTEN_ADDR = getenv("UI_LISTEN_ADDR", getenv("LISTEN_ADDR", "0.0.0.0"))
+LISTEN_PORT = getenv("UI_LISTEN_PORT", getenv("LISTEN_PORT", "7000"))
+FORWARDED_ALLOW_IPS = getenv("UI_FORWARDED_ALLOW_IPS", getenv("FORWARDED_ALLOW_IPS", "*"))
 CAPTURE_OUTPUT = getenv("CAPTURE_OUTPUT", "no").lower() == "yes"
 DEBUG = getenv("DEBUG", False)

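Both gunicorn config hunks above layer a UI-specific environment variable over the shared one by nesting `getenv` calls, so `UI_LISTEN_PORT` wins, then `LISTEN_PORT`, then the hardcoded default. A minimal sketch of that precedence (the `layered` helper is illustrative; only the variable names come from the diff):

```python
def layered(env: dict, specific: str, shared: str, default: str) -> str:
    """Resolve a setting the way the configs above do: the service-specific
    variable wins, then the shared variable, then the default.
    Mirrors getenv("UI_LISTEN_PORT", getenv("LISTEN_PORT", "7000"))."""
    return env.get(specific, env.get(shared, default))
```

This lets an all-in-one deployment keep one shared `LISTEN_PORT` while a single service overrides it with `UI_LISTEN_PORT` without touching the others.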