feat: isolate dev environment for multi-agent worktree support (#1994)

## Summary
- Isolate dev, E2E, and integration test environments so multiple git worktrees can run all three simultaneously without port conflicts
- Each worktree gets a deterministic slot (0-99) that offsets its ports within each environment's range: dev (30100-31199), E2E (20320-21399), CI integration (14320-40098)
- Dev portal dashboard (http://localhost:9900) auto-discovers all running stacks, streams logs, and provides a History tab for past run logs

## Port Isolation

| Environment | Port Range | Project Name |
|---|---|---|
| Dev stack | 30100-31199 | `hdx-dev-<slot>` |
| E2E tests | 20320-21399 | `e2e-<slot>` |
| CI integration | 14320-40098 | `int-<slot>` |

All three can run simultaneously from the same worktree with zero port conflicts.
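The slot derivation behind those ranges can be sketched in a few lines of shell. This mirrors the `cksum`-based recipe used in the Makefile; the `derive_slot` helper name and the example worktree name are illustrative, not part of this PR:

```shell
# Deterministic slot (0-99) from a worktree directory name, then base + slot
# ports. Mirrors the Makefile's cksum recipe; derive_slot is a made-up helper.
derive_slot() {
  printf '%s' "$1" | cksum | awk '{print $1 % 100}'
}

slot=$(derive_slot "hdx-my-worktree")   # example worktree name
api_port=$((30100 + slot))              # dev API server base
app_port=$((30200 + slot))              # dev Next.js app base
e2e_api_port=$((21000 + slot))          # E2E API server base
echo "slot=$slot api=$api_port app=$app_port e2e_api=$e2e_api_port"
```

Because the hash depends only on the directory name, re-running in the same worktree always lands on the same ports.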

## Dev Portal Features

**Live tab:**
- Auto-discovers dev, E2E, and integration Docker containers + local services (API, App)
- Groups all environments for the same worktree into a single card
- SSE log streaming with ANSI color rendering, capped at 5000 lines
- Auto-starts in background from `make dev`, `make dev-e2e`, `make dev-int`

**History tab:**
- Logs archived to `~/.config/hyperdx/dev-slots/<slot>/history/` on exit instead of being deleted
- Each archived run includes `meta.json` with worktree/branch metadata
- Grouped by worktree with collapsible cards, search by worktree/branch
- View any past log file in the same log panel, delete individual runs or clear all
- Custom dark-themed confirm modal (no native browser dialogs)
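An archived run is simple enough to reproduce by hand. In the sketch below, the `meta.json` field names (`worktree`, `branch`, `worktreePath`) match the archive helper added in the Makefile, but the run name, branch, and paths are placeholders, and a temp directory stands in for the real `~/.config/hyperdx/dev-slots/<slot>/history/` location:

```shell
# One archived run: a directory of captured log files plus a meta.json
# describing the worktree that produced them. All concrete values here are
# placeholders; only the field names come from the Makefile helper.
hist="$(mktemp -d)/int-2026-03-31T18:24:24Z"
mkdir -p "$hist"
echo "example captured output" > "$hist/api-int.log"
printf '{"worktree":"%s","branch":"%s","worktreePath":"%s"}\n' \
  "hdx-my-worktree" "feat/isolation" "/work/hdx-my-worktree" > "$hist/meta.json"

# The History tab's worktree/branch search keys off these fields:
grep -o '"branch":"[^"]*"' "$hist/meta.json"   # -> "branch":"feat/isolation"
```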

## What Changed

- **`scripts/dev-env.sh`** — Slot-based port assignments, portal auto-start, log archival on exit
- **`scripts/test-e2e.sh`** — E2E port range (20320-21399), log capture via `tee`, portal auto-start, log archival
- **`scripts/ensure-dev-portal.sh`** — Shared singleton portal launcher (works sourced or executed)
- **`scripts/dev-portal/server.js`** — Discovery for dev/E2E/CI containers, history API (list/read/delete), local service port probing
- **`scripts/dev-portal/index.html`** — Live/History tabs, worktree-grouped cards, search, collapse/expand, custom confirm modal, ANSI color log rendering
- **`docker-compose.dev.yml`** — Parameterized ports/volumes/project name with `hdx.dev.*` labels
- **`packages/app/tests/e2e/docker-compose.yml`** — Updated to new E2E port defaults
- **`Makefile`** — `dev-int`/`dev-e2e` targets with log capture + portal auto-start; `dev-portal-stop`; `dev-clean` stops everything + wipes slot data
- **`.env` files** — Ports use `${VAR:-default}` syntax across dev, E2E, and CI environments
- **`agent_docs/development.md`** — Full documentation for isolation, port tables, E2E/CI port ranges
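The `${VAR:-default}` convention in the `.env` files is what lets both workflows coexist: plain `yarn dev` falls back to the historical ports, while sourcing `scripts/dev-env.sh` wins simply by exporting slot-derived values before the file is read. A quick illustration (8000 is the real dev default; 30142 is an example slot-42 value):

```shell
# No override exported: the historical default applies.
unset HYPERDX_API_PORT
echo "port=${HYPERDX_API_PORT:-8000}"   # prints port=8000

# Isolation helper exported a slot-derived value first: it wins.
export HYPERDX_API_PORT=30142           # e.g. slot 42 -> 30100 + 42
echo "port=${HYPERDX_API_PORT:-8000}"   # prints port=30142
```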

## How to Use

```bash
# Start dev stack (auto-starts portal)
make dev

# Run E2E tests (auto-starts portal, separate ports)
make dev-e2e FILE=navigation

# Run integration tests (auto-starts portal, separate ports)
make dev-int FILE=alerts

# All three can run simultaneously from the same worktree
# Portal at http://localhost:9900 shows everything

# Stop portal
make dev-portal-stop

# Clean up everything (all stacks + portal + history)
make dev-clean
```

## Dev Portal

<img width="1692" height="944" alt="image" src="https://github.com/user-attachments/assets/6ed388a3-43bc-4552-aa8d-688077b79fb7" />

<img width="1689" height="935" alt="image" src="https://github.com/user-attachments/assets/8677a138-0a40-4746-93ed-3b355c8bd45e" />

## Test Plan
- [x] Run `make dev` — verify services start with slot-assigned ports
- [x] Run `make dev` in a second worktree — verify different ports, no conflicts
- [x] Run `make dev-e2e` and `make dev-int` simultaneously — no port conflicts
- [x] Open http://localhost:9900 — verify all stacks grouped by worktree
- [x] Click a service to view logs — verify ANSI colors render correctly
- [x] Stop a stack — verify logs archived to History tab with correct worktree
- [x] History tab — search, collapse/expand, view archived logs, delete
- [x] `make dev-clean` — stops everything, wipes slot data and history

---

Authored by Warren Lee on 2026-03-31 11:24:24 -07:00; committed by GitHub (parent `9852e9b0b7`, commit `6e8ddd3736`, GPG key ID `B5690EEEBB952194`). 26 changed files with 3155 additions and 146 deletions.

.env — 24 changes

@ -14,13 +14,29 @@ IMAGE_VERSION=2
IMAGE_NIGHTLY_TAG=2-nightly
IMAGE_LATEST_TAG=latest
# Set up domain URLs
HYPERDX_API_PORT=8000 #optional (should not be taken by other services)
HYPERDX_APP_PORT=8080
# ---------------------------------------------------------------------------
# Dev environment ports
# ---------------------------------------------------------------------------
# When using worktree isolation (source scripts/dev-env.sh), these are
# overridden with slot-derived values. The defaults below match the original
# hardcoded ports so that existing workflows (yarn dev without the isolation
# helper) still work.
HYPERDX_API_PORT=${HYPERDX_API_PORT:-8000}
HYPERDX_APP_PORT=${HYPERDX_APP_PORT:-8080}
HYPERDX_APP_URL=http://localhost
HYPERDX_LOG_LEVEL=debug
HYPERDX_OPAMP_PORT=4320
HYPERDX_OPAMP_PORT=${HYPERDX_OPAMP_PORT:-4320}
HYPERDX_BASE_PATH=
# Docker service ports (overridden by scripts/dev-env.sh for isolation)
HDX_DEV_MONGO_PORT=${HDX_DEV_MONGO_PORT:-27017}
HDX_DEV_CH_HTTP_PORT=${HDX_DEV_CH_HTTP_PORT:-8123}
HDX_DEV_CH_NATIVE_PORT=${HDX_DEV_CH_NATIVE_PORT:-9000}
HDX_DEV_OTEL_HEALTH_PORT=${HDX_DEV_OTEL_HEALTH_PORT:-13133}
HDX_DEV_OTEL_GRPC_PORT=${HDX_DEV_OTEL_GRPC_PORT:-4317}
HDX_DEV_OTEL_HTTP_PORT=${HDX_DEV_OTEL_HTTP_PORT:-4318}
HDX_DEV_OTEL_METRICS_PORT=${HDX_DEV_OTEL_METRICS_PORT:-8888}
HDX_DEV_OTEL_JSON_HTTP_PORT=${HDX_DEV_OTEL_JSON_HTTP_PORT:-14318}
# Otel/Clickhouse config
HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE=default


@ -14,16 +14,16 @@ jobs:
matrix:
shard: [1, 2, 3, 4]
# E2E port configuration (slot 0 defaults)
# E2E port configuration (slot 0 defaults — must match scripts/test-e2e.sh)
env:
HDX_E2E_SLOT: '0'
HDX_E2E_OPAMP_PORT: '14320'
HDX_E2E_CH_PORT: '18123'
HDX_E2E_CH_NATIVE_PORT: '18223'
HDX_E2E_API_PORT: '19000'
HDX_E2E_MONGO_PORT: '39999'
HDX_E2E_APP_LOCAL_PORT: '48001'
HDX_E2E_APP_PORT: '48081'
HDX_E2E_OPAMP_PORT: '20320'
HDX_E2E_CH_PORT: '20500'
HDX_E2E_CH_NATIVE_PORT: '20600'
HDX_E2E_API_PORT: '21000'
HDX_E2E_MONGO_PORT: '21100'
HDX_E2E_APP_LOCAL_PORT: '21200'
HDX_E2E_APP_PORT: '21300'
steps:
- name: Checkout

.gitignore (vendored) — 1 change

@ -41,6 +41,7 @@ specs/
# Next.js build output
packages/app/.next
packages/app/.next-e2e
packages/app/.pnp
packages/app/.pnp.js
packages/app/.vercel


@ -26,12 +26,19 @@ MongoDB (configuration/metadata)
```bash
yarn setup # Install dependencies
yarn dev # Start full stack (Docker + local services)
yarn dev # Start full stack with worktree-isolated ports
```
The project uses **Yarn 4.5.1** workspaces. Docker Compose manages ClickHouse,
MongoDB, and the OTel Collector.
**This repo is multi-agent friendly.** `yarn dev`, `make dev-int`, and
`make dev-e2e` all use slot-based port isolation so multiple worktrees can run
dev servers, integration tests, and E2E tests simultaneously without conflicts.
A dev portal at http://localhost:9900 auto-starts and shows all running stacks.
See [`agent_docs/development.md`](agent_docs/development.md) for the full
multi-worktree setup, port allocation tables, and available commands.
## Working on the Codebase (HOW)
**Before starting a task**, read relevant documentation from the `agent_docs/`


@ -21,22 +21,31 @@ Service Descriptions:
Pre-requisites:
- Docker
- Node.js (`>=18.12.0`)
- Node.js (`>=22`)
- Yarn (v4)
You can get started by deploying a complete development stack in dev mode.
```bash
yarn run dev
yarn dev
```
This will start the Node.js API, Next.js frontend locally and the OpenTelemetry
collector and ClickHouse server in Docker.
Each worktree automatically gets unique ports so multiple developers (or agents)
can run `yarn dev` simultaneously without conflicts. A dev portal at
http://localhost:9900 auto-starts and shows all running stacks with their
assigned ports. Check the portal to find the URL for your instance.
To stop the stack:
```bash
yarn dev:down
```
To enable self-instrumentation and demo logs, you can set the `HYPERDX_API_KEY`
to your ingestion key (go to
[http://localhost:8080/team](http://localhost:8080/team) after creating your
account).
to your ingestion key (visit the Team settings page after creating your account).
To do this, create a `.env.local` file in the root of the project and add the
following:
@ -53,7 +62,9 @@ see them reflected in real-time.
### Volumes
The development stack mounts volumes locally for persisting storage under
`.volumes`. Clear this directory to reset ClickHouse and MongoDB storage.
`.volumes`. Each worktree gets its own volume directory (e.g.
`.volumes/ch_data_dev_89`). Clear the `.volumes` directory to reset ClickHouse
and MongoDB storage.
### Windows
@ -68,34 +79,38 @@ To develop from WSL, follow instructions
## Testing
All test environments use slot-based port isolation, so they can run
simultaneously with the dev stack and across multiple worktrees.
### E2E Tests
E2E tests run against a full local stack (MongoDB + ClickHouse + API). Docker must be running.
E2E tests run against a full local stack (MongoDB + ClickHouse + API). Docker
must be running.
```bash
# Run all E2E tests
./scripts/test-e2e.sh
make e2e
# Run a specific spec file
./scripts/test-e2e.sh --quiet packages/app/tests/e2e/features/<feature>.spec.ts
# Run a specific spec file (dev mode: hot reload, containers kept running)
make dev-e2e FILE=search
# Run a specific test by name
./scripts/test-e2e.sh --quiet packages/app/tests/e2e/features/<feature>.spec.ts --grep "\"test name\""
# Run with grep pattern
make dev-e2e FILE=search GREP="filter"
# Run via script directly for more control
./scripts/test-e2e.sh --ui --last-failed
```
Tests live in `packages/app/tests/e2e/`. Page objects are in `page-objects/`, shared components in `components/`.
Tests live in `packages/app/tests/e2e/`. Page objects are in `page-objects/`,
shared components in `components/`.
### Integration Tests
To run the tests locally, you can run the following command:
```bash
make dev-int
```
# Build dependencies (run once before first test run)
make dev-int-build
If you want to run a specific test file, you can run the following command:
```bash
# Run a specific test file
make dev-int FILE=checkAlerts
```

Makefile — 112 changes

@ -25,6 +25,25 @@ HDX_CI_OPAMP_PORT:= $(shell echo $$((14320 + $(HDX_CI_SLOT))))
export HDX_CI_CH_PORT HDX_CI_MONGO_PORT HDX_CI_API_PORT HDX_CI_OPAMP_PORT
# Log directory for dev-portal visibility (integration tests)
HDX_CI_LOGS_DIR := $(HOME)/.config/hyperdx/dev-slots/$(HDX_CI_SLOT)/logs-int
HDX_CI_HISTORY_DIR := $(HOME)/.config/hyperdx/dev-slots/$(HDX_CI_SLOT)/history
# Archive integration logs to history (call at end of each test target)
# Usage: $(call archive-int-logs)
define archive-int-logs
if [ -d "$(HDX_CI_LOGS_DIR)" ] && [ -n "$$(ls -A $(HDX_CI_LOGS_DIR) 2>/dev/null)" ]; then \
_ts=$$(date -u +%Y-%m-%dT%H:%M:%SZ); \
_hist="$(HDX_CI_HISTORY_DIR)/int-$$_ts"; \
mkdir -p "$$_hist"; \
mv $(HDX_CI_LOGS_DIR)/* "$$_hist/" 2>/dev/null; \
_wt=$$(basename "$$(git rev-parse --show-toplevel 2>/dev/null || pwd)"); \
_br=$$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown"); \
printf '{"worktree":"%s","branch":"%s","worktreePath":"%s"}\n' "$$_wt" "$$_br" "$(CURDIR)" > "$$_hist/meta.json"; \
fi; \
rm -rf $(HDX_CI_LOGS_DIR) 2>/dev/null
endef
.PHONY: all
all: install-tools
@ -33,17 +52,55 @@ install-tools:
yarn setup
@echo "All tools installed"
# ---------------------------------------------------------------------------
# Dev environment with worktree isolation
# ---------------------------------------------------------------------------
# Ports are allocated in the 30100-31199 range (base + slot) to avoid
# conflicts with CI (14320-40098) and E2E (20320-21399) ports.
#
# Port mapping (base + slot):
# API server : 30100 + slot
# App (Next.js) : 30200 + slot
# OpAMP : 30300 + slot
# MongoDB : 30400 + slot
# ClickHouse HTTP : 30500 + slot
# ClickHouse Native : 30600 + slot
# OTel health : 30700 + slot
# OTel gRPC : 30800 + slot
# OTel HTTP : 30900 + slot
# OTel metrics : 31000 + slot
# OTel JSON HTTP : 31100 + slot
# ---------------------------------------------------------------------------
.PHONY: dev
dev:
yarn dev
.PHONY: dev-build
dev-build:
docker compose -f docker-compose.dev.yml build
bash -c '. ./scripts/dev-env.sh && docker compose -p "$$HDX_DEV_PROJECT" -f docker-compose.dev.yml build'
.PHONY: dev-up
dev-up:
npm run dev
yarn dev
.PHONY: dev-down
dev-down:
docker compose -f docker-compose.dev.yml down
yarn dev:down
.PHONY: dev-portal
dev-portal:
node scripts/dev-portal/server.js
.PHONY: dev-portal-stop
dev-portal-stop:
@pid=$$(lsof -ti :$${HDX_PORTAL_PORT:-9900} 2>/dev/null); \
if [ -n "$$pid" ]; then \
echo "Stopping dev portal (PID $$pid)"; \
kill $$pid 2>/dev/null || true; \
else \
echo "Dev portal is not running"; \
fi
.PHONY: dev-lint
dev-lint:
@ -57,6 +114,35 @@ ci-build:
ci-lint:
npx nx run-many -t ci:lint
.PHONY: dev-int-down
dev-int-down:
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down
@for port in $(HDX_CI_API_PORT) $(HDX_CI_OPAMP_PORT); do \
pids=$$(lsof -ti :$$port 2>/dev/null); \
for pid in $$pids; do \
echo "Killing process $$pid on port $$port"; \
kill $$pid 2>/dev/null || true; \
done; \
done
@$(call archive-int-logs); true
.PHONY: dev-e2e-down
dev-e2e-down:
$(eval HDX_E2E_SLOT := $(shell printf '%s' "$(notdir $(CURDIR))" | cksum | awk '{print $$1 % 100}'))
docker compose -p e2e-$(HDX_E2E_SLOT) -f packages/app/tests/e2e/docker-compose.yml down -v
@for port in $$((21000 + $(HDX_E2E_SLOT))) $$((20320 + $(HDX_E2E_SLOT))) $$((21300 + $(HDX_E2E_SLOT))) $$((21200 + $(HDX_E2E_SLOT))); do \
pids=$$(lsof -ti :$$port 2>/dev/null); \
for pid in $$pids; do \
echo "Killing process $$pid on port $$port"; \
kill $$pid 2>/dev/null || true; \
done; \
done
.PHONY: dev-clean
dev-clean: dev-down dev-int-down dev-e2e-down dev-portal-stop
@rm -rf $(HOME)/.config/hyperdx/dev-slots
@echo "All dev services cleaned up"
.PHONY: dev-int-build
dev-int-build:
npx nx run-many -t ci:build
@ -65,23 +151,33 @@ dev-int-build:
.PHONY: dev-int
dev-int:
@echo "Using CI slot $(HDX_CI_SLOT) (project=$(HDX_CI_PROJECT) ch=$(HDX_CI_CH_PORT) mongo=$(HDX_CI_MONGO_PORT) api=$(HDX_CI_API_PORT))"
@mkdir -p $(HDX_CI_LOGS_DIR)
@bash scripts/ensure-dev-portal.sh
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml up -d
npx nx run @hyperdx/api:dev:int $(FILE); ret=$$?; \
bash -c 'set -o pipefail; npx nx run @hyperdx/api:dev:int $(FILE) 2>&1 | tee $(HDX_CI_LOGS_DIR)/api-int.log'; ret=$$?; \
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down; \
$(call archive-int-logs); \
exit $$ret
.PHONY: dev-int-common-utils
dev-int-common-utils:
@echo "Using CI slot $(HDX_CI_SLOT) (project=$(HDX_CI_PROJECT) ch=$(HDX_CI_CH_PORT) mongo=$(HDX_CI_MONGO_PORT))"
@mkdir -p $(HDX_CI_LOGS_DIR)
@bash scripts/ensure-dev-portal.sh
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml up -d
npx nx run @hyperdx/common-utils:dev:int $(FILE)
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down
bash -c 'set -o pipefail; npx nx run @hyperdx/common-utils:dev:int $(FILE) 2>&1 | tee $(HDX_CI_LOGS_DIR)/common-utils-int.log'; ret=$$?; \
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down; \
$(call archive-int-logs); \
exit $$ret
.PHONY: ci-int
ci-int:
@mkdir -p $(HDX_CI_LOGS_DIR)
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml up -d --quiet-pull
npx nx run-many -t ci:int --parallel=false
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down
bash -c 'set -o pipefail; npx nx run-many -t ci:int --parallel=false 2>&1 | tee $(HDX_CI_LOGS_DIR)/ci-int.log'; ret=$$?; \
docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down; \
$(call archive-int-logs); \
exit $$ret
.PHONY: dev-unit
dev-unit:


@ -6,20 +6,18 @@
# Install dependencies and setup hooks
yarn setup
# Start full development stack (Docker + local services)
yarn dev
# Start full development stack (auto-assigns unique ports per worktree)
yarn dev # or equivalently: make dev
```
## Key Development Scripts
- `yarn app:dev`: Start API, frontend, alerts task, and common-utils in watch
mode
- `yarn dev` / `make dev`: Start full dev stack with worktree-isolated ports. A
dev portal at http://localhost:9900 auto-starts showing all running stacks.
- `yarn dev:down` / `make dev-down`: Stop the dev stack for the current worktree
- `make dev-portal`: Start the dev portal manually (auto-started by `yarn dev`)
- `yarn lint`: Run linting across all packages
- `yarn dev:int`: Run integration tests in watch mode
- `yarn dev:unit`: Run unit tests in watch mode (per package)
- `yarn test:e2e`: Run Playwright E2E tests (in `packages/app`)
- `yarn test:e2e:ci`: Run Playwright E2E tests in CI Docker environment (in
`packages/app`)
## Environment Configuration
@ -27,6 +25,65 @@ yarn dev
- Docker Compose manages ClickHouse, MongoDB, OTel Collector
- Hot reload enabled for all services in development
## Worktree Isolation (Multi-Agent / Multi-Developer)
When multiple git worktrees need to run the dev stack simultaneously (e.g.
multiple agents or developers working in parallel), use `make dev` instead of
`yarn dev`. This automatically assigns unique ports per worktree.
### How It Works
1. A deterministic slot (0-99) is computed from the worktree directory name (via
`cksum`)
2. Each service gets a unique port: `base + slot` (see table below)
3. Docker Compose runs with a unique project name (`hdx-dev-<slot>`)
4. Volume paths include the slot to prevent data corruption between worktrees
### Dev Port Mapping (base + slot)
Ports are allocated in the 30100-31199 range to avoid conflicts with CI
integration tests (14320-40098) and E2E tests (20320-21399).
| Service | Base Port | Range | Env Variable |
| ----------------- | --------- | ------------- | ----------------------------- |
| API server | 30100 | 30100 - 30199 | `HYPERDX_API_PORT` |
| App (Next.js) | 30200 | 30200 - 30299 | `HYPERDX_APP_PORT` |
| OpAMP | 30300 | 30300 - 30399 | `HYPERDX_OPAMP_PORT` |
| MongoDB | 30400 | 30400 - 30499 | `HDX_DEV_MONGO_PORT` |
| ClickHouse HTTP | 30500 | 30500 - 30599 | `HDX_DEV_CH_HTTP_PORT` |
| ClickHouse Native | 30600 | 30600 - 30699 | `HDX_DEV_CH_NATIVE_PORT` |
| OTel health | 30700 | 30700 - 30799 | `HDX_DEV_OTEL_HEALTH_PORT` |
| OTel gRPC | 30800 | 30800 - 30899 | `HDX_DEV_OTEL_GRPC_PORT` |
| OTel HTTP | 30900 | 30900 - 30999 | `HDX_DEV_OTEL_HTTP_PORT` |
| OTel metrics | 31000 | 31000 - 31099 | `HDX_DEV_OTEL_METRICS_PORT` |
| OTel JSON HTTP | 31100 | 31100 - 31199 | `HDX_DEV_OTEL_JSON_HTTP_PORT` |
### Dev Portal
The dev portal is a centralized web dashboard that discovers all running
worktree stacks by inspecting Docker container labels and slot files.
```bash
# Start the portal (runs on fixed port 9900)
make dev-portal
# Open in browser
open http://localhost:9900
```
The portal auto-refreshes every 3 seconds and shows each worktree's:
- Branch name and slot number
- All services with status (running/stopped) and clickable port links
- Separate cards for each active worktree
### Overriding the Slot
```bash
# Use a specific slot instead of the auto-computed one
HDX_DEV_SLOT=5 make dev
```
## Testing Strategy
### Testing Tools
@ -89,8 +146,21 @@ Port mapping (base + slot):
### E2E Testing
E2E tests use the same slot-based isolation pattern as integration tests, so
multiple agents can run E2E tests in parallel without port conflicts.
E2E tests use the same slot-based isolation pattern as integration tests, with
their own dedicated port range (20320-21399) so they can run simultaneously with
both the dev stack and CI integration tests.
E2E port mapping (base + slot):
| Service | Base Port | Range | Env Variable |
| ----------------- | --------- | ------------- | ------------------------ |
| OpAMP | 20320 | 20320 - 20419 | `HDX_E2E_OPAMP_PORT` |
| ClickHouse HTTP | 20500 | 20500 - 20599 | `HDX_E2E_CH_PORT` |
| ClickHouse Native | 20600 | 20600 - 20699 | `HDX_E2E_CH_NATIVE_PORT` |
| API server | 21000 | 21000 - 21099 | `HDX_E2E_API_PORT` |
| MongoDB | 21100 | 21100 - 21199 | `HDX_E2E_MONGO_PORT` |
| App (local) | 21200 | 21200 - 21299 | `HDX_E2E_APP_LOCAL_PORT` |
| App (fullstack) | 21300 | 21300 - 21399 | `HDX_E2E_APP_PORT` |
```bash
# Run all E2E tests
@ -120,21 +190,9 @@ HDX_E2E_SLOT=5 ./scripts/test-e2e.sh
range
- The slot and assigned ports are printed when E2E tests start
Port mapping (base + slot):
| Service | Default port (slot 0) | Variable |
| ----------------- | --------------------- | ---------------------- |
| OpAMP | 14320 | HDX_E2E_OPAMP_PORT |
| ClickHouse HTTP | 18123 | HDX_E2E_CH_PORT |
| ClickHouse Native | 18223 | HDX_E2E_CH_NATIVE_PORT |
| API server | 19000 | HDX_E2E_API_PORT |
| MongoDB | 39999 | HDX_E2E_MONGO_PORT |
| App (local) | 48001 | HDX_E2E_APP_LOCAL_PORT |
| App (fullstack) | 48081 | HDX_E2E_APP_PORT |
**Port range safety:** E2E shares the same base ports as `dev-int` (they never
run simultaneously). All ports are below the OS ephemeral range (49152) to avoid
conflicts with OrbStack and Docker networking.
**Port range safety:** E2E has its own dedicated port range (20320-21399) that
does not overlap with CI integration tests (14320-40098) or the dev stack
(30100-31199), so all three can run simultaneously from the same worktree.
## Common Development Tasks


@ -1,26 +1,27 @@
name: hdx-oss-dev
x-hyperdx-logging: &hyperdx-logging
driver: fluentd
options:
fluentd-address: tcp://localhost:24225
fluentd-async: 'true'
labels: 'service.name'
name: ${HDX_DEV_PROJECT:-hdx-oss-dev}
x-hdx-labels: &hdx-labels
hdx.dev.slot: '${HDX_DEV_SLOT:-0}'
hdx.dev.branch: '${HDX_DEV_BRANCH:-unknown}'
hdx.dev.worktree: '${HDX_DEV_WORKTREE:-unknown}'
services:
db:
logging: *hyperdx-logging
labels:
service.name: 'hdx-oss-dev-db'
<<: *hdx-labels
hdx.dev.service: mongodb
hdx.dev.port: '${HDX_DEV_MONGO_PORT:-27017}'
image: mongo:5.0.32-focal
volumes:
- .volumes/db_dev:/data/db
- .volumes/db_dev_${HDX_DEV_SLOT:-0}:/data/db
ports:
- 27017:27017
- '${HDX_DEV_MONGO_PORT:-27017}:27017'
networks:
- internal
depends_on:
- otel-collector
otel-collector:
# image: otel/opentelemetry-collector-contrib:0.120.0
labels:
<<: *hdx-labels
hdx.dev.service: otel-collector
hdx.dev.port: '${HDX_DEV_OTEL_HTTP_PORT:-4318}'
hdx.dev.url: 'http://localhost:${HDX_DEV_OTEL_HTTP_PORT:-4318}'
build:
context: .
dockerfile: docker/otel-collector/Dockerfile
@ -31,7 +32,7 @@ services:
HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE: ${HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE}
HYPERDX_API_KEY: ${HYPERDX_API_KEY}
HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
OPAMP_SERVER_URL: 'http://host.docker.internal:${HYPERDX_OPAMP_PORT}'
OPAMP_SERVER_URL: 'http://host.docker.internal:${HYPERDX_OPAMP_PORT:-4320}'
CUSTOM_OTELCOL_CONFIG_FILE: '/etc/otelcol-contrib/custom.config.yaml'
# Uncomment to enable stdout logging for the OTel collector
OTEL_SUPERVISOR_LOGS: 'true'
@ -42,11 +43,10 @@ services:
# Add a custom config file
- ./docker/otel-collector/custom.config.yaml:/etc/otelcol-contrib/custom.config.yaml
ports:
- '13133:13133' # health_check extension
- '24225:24225' # fluentd receiver
- '4317:4317' # OTLP gRPC receiver
- '4318:4318' # OTLP http receiver
- '8888:8888' # metrics extension
- '${HDX_DEV_OTEL_HEALTH_PORT:-13133}:13133' # health_check extension
- '${HDX_DEV_OTEL_GRPC_PORT:-4317}:4317' # OTLP gRPC receiver
- '${HDX_DEV_OTEL_HTTP_PORT:-4318}:4318' # OTLP http receiver
- '${HDX_DEV_OTEL_METRICS_PORT:-8888}:8888' # metrics extension
restart: always
networks:
- internal
@ -54,7 +54,11 @@ services:
ch-server:
condition: service_healthy
otel-collector-json:
# image: otel/opentelemetry-collector-contrib:0.120.0
labels:
<<: *hdx-labels
hdx.dev.service: otel-collector-json
hdx.dev.port: '${HDX_DEV_OTEL_JSON_HTTP_PORT:-14318}'
hdx.dev.url: 'http://localhost:${HDX_DEV_OTEL_JSON_HTTP_PORT:-14318}'
build:
context: .
dockerfile: docker/otel-collector/Dockerfile
@ -66,7 +70,7 @@ services:
HYPERDX_OTEL_EXPORTER_CREATE_LEGACY_SCHEMA: 'true'
HYPERDX_API_KEY: ${HYPERDX_API_KEY}
HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
OPAMP_SERVER_URL: 'http://host.docker.internal:${HYPERDX_OPAMP_PORT}'
OPAMP_SERVER_URL: 'http://host.docker.internal:${HYPERDX_OPAMP_PORT:-4320}'
CUSTOM_OTELCOL_CONFIG_FILE: '/etc/otelcol-contrib/custom.config.yaml'
# Uncomment to enable stdout logging for the OTel collector
OTEL_SUPERVISOR_LOGS: 'true'
@ -77,7 +81,7 @@ services:
- ./docker/otel-collector/config.yaml:/etc/otelcol-contrib/config.yaml
- ./docker/otel-collector/supervisor_docker.yaml.tmpl:/etc/otel/supervisor.yaml.tmpl
ports:
- '14318:4318' # OTLP http receiver
- '${HDX_DEV_OTEL_JSON_HTTP_PORT:-14318}:4318' # OTLP http receiver
restart: always
networks:
- internal
@ -85,10 +89,15 @@ services:
ch-server:
condition: service_healthy
ch-server:
labels:
<<: *hdx-labels
hdx.dev.service: clickhouse
hdx.dev.port: '${HDX_DEV_CH_HTTP_PORT:-8123}'
hdx.dev.url: 'http://localhost:${HDX_DEV_CH_HTTP_PORT:-8123}'
image: clickhouse/clickhouse-server:26.1-alpine
ports:
- 8123:8123 # http api
- 9000:9000 # native
- '${HDX_DEV_CH_HTTP_PORT:-8123}:8123' # http api
- '${HDX_DEV_CH_NATIVE_PORT:-9000}:9000' # native
environment:
# default settings
CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1
@ -99,13 +108,12 @@ services:
volumes:
- ./docker/clickhouse/local/config.xml:/etc/clickhouse-server/config.xml
- ./docker/clickhouse/local/users.xml:/etc/clickhouse-server/users.xml
- .volumes/ch_data_dev:/var/lib/clickhouse
- .volumes/ch_logs_dev:/var/log/clickhouse-server
- .volumes/ch_data_dev_${HDX_DEV_SLOT:-0}:/var/lib/clickhouse
- .volumes/ch_logs_dev_${HDX_DEV_SLOT:-0}:/var/log/clickhouse-server
restart: on-failure
networks:
- internal
healthcheck:
# "clickhouse", "client", "-u ${CLICKHOUSE_USER}", "--password ${CLICKHOUSE_PASSWORD}", "-q 'SELECT 1'"
test:
wget -O /dev/null --no-verbose --tries=1 http://127.0.0.1:8123/ping ||
exit 1


@ -34,16 +34,16 @@
"scripts": {
"setup": "yarn install && husky install",
"build:common-utils": "nx run @hyperdx/common-utils:dev:build",
"app:dev": "concurrently -k -n 'API,APP,ALERTS-TASK,COMMON-UTILS' -c 'green.bold,blue.bold,yellow.bold,magenta' 'nx run @hyperdx/api:dev' 'nx run @hyperdx/app:dev' 'nx run @hyperdx/api:dev-task check-alerts' 'nx run @hyperdx/common-utils:dev'",
"app:dev": "concurrently -k -n 'API,APP,ALERTS-TASK,COMMON-UTILS' -c 'green.bold,blue.bold,yellow.bold,magenta' 'nx run @hyperdx/api:dev 2>&1 | tee ${HDX_DEV_LOGS_DIR:+\"$HDX_DEV_LOGS_DIR/api.log\"}' 'nx run @hyperdx/app:dev 2>&1 | tee ${HDX_DEV_LOGS_DIR:+\"$HDX_DEV_LOGS_DIR/app.log\"}' 'nx run @hyperdx/api:dev-task check-alerts 2>&1 | tee ${HDX_DEV_LOGS_DIR:+\"$HDX_DEV_LOGS_DIR/alerts.log\"}' 'nx run @hyperdx/common-utils:dev 2>&1 | tee ${HDX_DEV_LOGS_DIR:+\"$HDX_DEV_LOGS_DIR/common-utils.log\"}'",
"app:dev:local": "concurrently -k -n 'APP,COMMON-UTILS' -c 'blue.bold,magenta' 'nx run @hyperdx/app:dev:local' 'nx run @hyperdx/common-utils:dev'",
"app:lint": "nx run @hyperdx/app:ci:lint",
"app:storybook": "nx run @hyperdx/app:storybook",
"build:clickhouse": "nx run @hyperdx/common-utils:build && nx run @hyperdx/app:build:clickhouse",
"run:clickhouse": "nx run @hyperdx/app:run:clickhouse",
"dev": "yarn build:common-utils && dotenvx run --convention=nextjs -- docker compose -f docker-compose.dev.yml up -d && yarn app:dev && docker compose -f docker-compose.dev.yml down",
"dev": "sh -c '. ./scripts/dev-env.sh && yarn build:common-utils && dotenvx run --convention=nextjs -- docker compose -p \"$HDX_DEV_PROJECT\" -f docker-compose.dev.yml up -d && yarn app:dev; dotenvx run --convention=nextjs -- docker compose -p \"$HDX_DEV_PROJECT\" -f docker-compose.dev.yml down'",
"dev:local": "IS_LOCAL_APP_MODE='DANGEROUSLY_is_local_app_mode💀' yarn dev",
"dev:down": "docker compose -f docker-compose.dev.yml down",
"dev:compose": "docker compose -f docker-compose.dev.yml",
"dev:down": "sh -c '. ./scripts/dev-env.sh && docker compose -p \"$HDX_DEV_PROJECT\" -f docker-compose.dev.yml down && sh ./scripts/dev-kill-ports.sh'",
"dev:compose": "sh -c '. ./scripts/dev-env.sh && docker compose -p \"$HDX_DEV_PROJECT\" -f docker-compose.dev.yml'",
"knip": "knip",
"knip:ci": "knip --reporter json",
"lint": "npx nx run-many -t ci:lint",


@ -1,7 +1,8 @@
HYPERDX_API_KEY="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
HYPERDX_API_PORT=8000
HYPERDX_OPAMP_PORT=4320
HYPERDX_APP_PORT=8080
# Ports are overridden by scripts/dev-env.sh for worktree isolation
HYPERDX_API_PORT=${HYPERDX_API_PORT:-8000}
HYPERDX_OPAMP_PORT=${HYPERDX_OPAMP_PORT:-4320}
HYPERDX_APP_PORT=${HYPERDX_APP_PORT:-8080}
HYPERDX_LOG_LEVEL=debug
EXPRESS_SESSION_SECRET="hyperdx is cool 👋"
FRONTEND_URL="http://localhost:${HYPERDX_APP_PORT}"
@ -11,18 +12,18 @@ HDX_NODE_CONSOLE_CAPTURE=1
HYPERDX_API_KEY=${HYPERDX_API_KEY}
HYPERDX_LOG_LEVEL=${HYPERDX_LOG_LEVEL}
MINER_API_URL="http://localhost:5123"
MONGO_URI="mongodb://localhost:27017/hyperdx"
MONGO_URI="mongodb://localhost:${HDX_DEV_MONGO_PORT:-27017}/hyperdx"
NODE_ENV=development
OTEL_SERVICE_NAME="hdx-oss-dev-api"
OTEL_RESOURCE_ATTRIBUTES="service.version=dev"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:${HDX_DEV_OTEL_HTTP_PORT:-4318}"
PORT=${HYPERDX_API_PORT}
OPAMP_PORT=${HYPERDX_OPAMP_PORT}
REDIS_URL=redis://localhost:6379
USAGE_STATS_ENABLED=false
NODE_OPTIONS="--max-http-header-size=131072"
ENABLE_SWAGGER=true
DEFAULT_CONNECTIONS=[{"name":"Local ClickHouse","host":"http://localhost:8123","username":"default","password":""}]
DEFAULT_CONNECTIONS=[{"name":"Local ClickHouse","host":"http://localhost:${HDX_DEV_CH_HTTP_PORT:-8123}","username":"default","password":""}]
DEFAULT_SOURCES=[{"from":{"databaseName":"default","tableName":"otel_logs"},"kind":"log","timestampValueExpression":"TimestampTime","name":"Logs","displayedTimestampValueExpression":"Timestamp","implicitColumnExpression":"Body","serviceNameExpression":"ServiceName","bodyExpression":"Body","eventAttributesExpression":"LogAttributes","resourceAttributesExpression":"ResourceAttributes","defaultTableSelectExpression":"Timestamp,ServiceName,SeverityText,Body","severityTextExpression":"SeverityText","traceIdExpression":"TraceId","spanIdExpression":"SpanId","connection":"Local ClickHouse","traceSourceId":"Traces","sessionSourceId":"Sessions","metricSourceId":"Metrics"},{"from":{"databaseName":"default","tableName":"otel_traces"},"kind":"trace","timestampValueExpression":"Timestamp","name":"Traces","displayedTimestampValueExpression":"Timestamp","implicitColumnExpression":"SpanName","serviceNameExpression":"ServiceName","eventAttributesExpression":"SpanAttributes","resourceAttributesExpression":"ResourceAttributes","defaultTableSelectExpression":"Timestamp,ServiceName,StatusCode,round(Duration/1e6),SpanName","traceIdExpression":"TraceId","spanIdExpression":"SpanId","durationExpression":"Duration","durationPrecision":9,"parentSpanIdExpression":"ParentSpanId","spanNameExpression":"SpanName","spanKindExpression":"SpanKind","statusCodeExpression":"StatusCode","statusMessageExpression":"StatusMessage","connection":"Local ClickHouse","logSourceId":"Logs","sessionSourceId":"Sessions","metricSourceId":"Metrics"},{"from":{"databaseName":"default","tableName":""},"kind":"metric","timestampValueExpression":"TimeUnix","name":"Metrics","resourceAttributesExpression":"ResourceAttributes","metricTables":{"gauge":"otel_metrics_gauge","histogram":"otel_metrics_histogram","sum":"otel_metrics_sum","_id":"682586a8b1f81924e628e808","id":"682586a8b1f81924e628e808"},"connection":"Local ClickHouse","logSourceId":"Logs","traceSourceId":"Traces","sessionSourceId":"Sessions"},{"from":{"databaseName":"default","tableName":"hyperdx_sessions"},"kind":"session","timestampValueExpression":"TimestampTime","name":"Sessions","displayedTimestampValueExpression":"Timestamp","implicitColumnExpression":"Body","serviceNameExpression":"ServiceName","bodyExpression":"Body","eventAttributesExpression":"LogAttributes","resourceAttributesExpression":"ResourceAttributes","defaultTableSelectExpression":"Timestamp,ServiceName,SeverityText,Body","severityTextExpression":"SeverityText","traceIdExpression":"TraceId","spanIdExpression":"SpanId","connection":"Local ClickHouse","logSourceId":"Logs","traceSourceId":"Traces","metricSourceId":"Metrics"},{"from":{"databaseName":"otel_json","tableName":"otel_logs"},"kind":"log","timestampValueExpression":"Timestamp","name":"JSON Logs","displayedTimestampValueExpression":"Timestamp","implicitColumnExpression":"Body","serviceNameExpression":"ServiceName","bodyExpression":"Body","eventAttributesExpression":"LogAttributes","resourceAttributesExpression":"ResourceAttributes","defaultTableSelectExpression":"Timestamp,ServiceName,SeverityText,Body","severityTextExpression":"SeverityText","traceIdExpression":"TraceId","spanIdExpression":"SpanId","connection":"Local ClickHouse","traceSourceId":"JSON Traces","metricSourceId":"JSON Metrics"},{"from":{"databaseName":"otel_json","tableName":"otel_traces"},"kind":"trace","timestampValueExpression":"Timestamp","name":"JSON Traces","displayedTimestampValueExpression":"Timestamp","implicitColumnExpression":"SpanName","serviceNameExpression":"ServiceName","eventAttributesExpression":"SpanAttributes","resourceAttributesExpression":"ResourceAttributes","defaultTableSelectExpression":"Timestamp,ServiceName,StatusCode,round(Duration/1e6),SpanName","traceIdExpression":"TraceId","spanIdExpression":"SpanId","durationExpression":"Duration","durationPrecision":9,"parentSpanIdExpression":"ParentSpanId","spanNameExpression":"SpanName","spanKindExpression":"SpanKind","statusCodeExpression":"StatusCode","statusMessageExpression":"StatusMessage","connection":"Local ClickHouse","logSourceId":"JSON Logs","metricSourceId":"JSON Metrics"},{"from":{"databaseName":"otel_json","tableName":""},"kind":"metric","timestampValueExpression":"TimeUnix","name":"JSON Metrics","resourceAttributesExpression":"ResourceAttributes","metricTables":{"gauge":"otel_metrics_gauge","histogram":"otel_metrics_histogram","sum":"otel_metrics_sum"},"connection":"Local ClickHouse","logSourceId":"JSON Logs","traceSourceId":"JSON Traces"}]
INGESTION_API_KEY="super-secure-ingestion-api-key"
HYPERDX_API_KEY=$INGESTION_API_KEY


@@ -5,16 +5,16 @@
# See scripts/test-e2e.sh for slot-based port assignment.
# ClickHouse connection to local e2e test instance
-CLICKHOUSE_HOST=http://localhost:${HDX_E2E_CH_PORT:-18123}
+CLICKHOUSE_HOST=http://localhost:${HDX_E2E_CH_PORT:-20500}
CLICKHOUSE_PASSWORD=
CLICKHOUSE_USER=default
RUN_SCHEDULED_TASKS_EXTERNALLY=true
-FRONTEND_URL=http://localhost:${HDX_E2E_APP_PORT:-48081}
+FRONTEND_URL=http://localhost:${HDX_E2E_APP_PORT:-21300}
# MongoDB connection string
-MONGO_URI=mongodb://localhost:${HDX_E2E_MONGO_PORT:-39999}/hyperdx-e2e
+MONGO_URI=mongodb://localhost:${HDX_E2E_MONGO_PORT:-21100}/hyperdx-e2e
NODE_ENV=test
-PORT=${HDX_E2E_API_PORT:-19000}
-OPAMP_PORT=${HDX_E2E_OPAMP_PORT:-14320}
+PORT=${HDX_E2E_API_PORT:-21000}
+OPAMP_PORT=${HDX_E2E_OPAMP_PORT:-20320}
# DEFAULT_CONNECTIONS and DEFAULT_SOURCES are injected from packages/app/tests/e2e/fixtures/e2e-fixtures.json
# by the e2e API runner (run-e2e.js) and by base-test for local mode.
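
The `${VAR:-default}` pattern above is what lets scripts/test-e2e.sh inject slot-specific ports while the file keeps working standalone. A minimal sketch of the expansion precedence (the port defaults are the file's own; `20542` is an invented slot override):

```shell
# POSIX parameter expansion: ${VAR:-default} uses VAR when it is set and
# non-empty, otherwise substitutes the literal default.
unset HDX_E2E_CH_PORT
echo "http://localhost:${HDX_E2E_CH_PORT:-20500}"   # prints http://localhost:20500
HDX_E2E_CH_PORT=20542                               # invented slot override
echo "http://localhost:${HDX_E2E_CH_PORT:-20500}"   # prints http://localhost:20542
```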


@@ -1,10 +1,11 @@
HYPERDX_API_KEY="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-HYPERDX_API_PORT=8000
-HYPERDX_APP_PORT=8080
+# Ports are overridden by scripts/dev-env.sh for worktree isolation
+HYPERDX_API_PORT=${HYPERDX_API_PORT:-8000}
+HYPERDX_APP_PORT=${HYPERDX_APP_PORT:-8080}
SERVER_URL="http://localhost:${HYPERDX_API_PORT}"
NODE_ENV=development
-OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
+OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:${HDX_DEV_OTEL_HTTP_PORT:-4318}"
OTEL_SERVICE_NAME="hdx-oss-dev-app"
PORT=${HYPERDX_APP_PORT}
NODE_OPTIONS="--max-http-header-size=131072"
NEXT_PUBLIC_HYPERDX_BASE_PATH=


@@ -20,6 +20,9 @@ configureRuntimeEnv();
const basePath = process.env.NEXT_PUBLIC_HYPERDX_BASE_PATH;
const nextConfig = {
+  // Allow overriding the build/dev output directory to avoid lock conflicts
+  // when running dev and E2E simultaneously (e.g. NEXT_DIST_DIR=.next-e2e)
+  ...(process.env.NEXT_DIST_DIR ? { distDir: process.env.NEXT_DIST_DIR } : {}),
  reactCompiler: true,
  basePath: basePath,
  env: {

@@ -8,9 +8,9 @@ const USE_DEV = process.env.E2E_USE_DEV === 'true';
const AUTH_FILE = path.join(__dirname, 'tests/e2e/.auth/user.json');
// Port configuration (set by scripts/test-e2e.sh via HDX_E2E_* env vars)
-const API_PORT = process.env.HDX_E2E_API_PORT || '19000';
-const APP_PORT = process.env.HDX_E2E_APP_PORT || '48081';
-const APP_LOCAL_PORT = process.env.HDX_E2E_APP_LOCAL_PORT || '48001';
+const API_PORT = process.env.HDX_E2E_API_PORT || '21000';
+const APP_PORT = process.env.HDX_E2E_APP_PORT || '21300';
+const APP_LOCAL_PORT = process.env.HDX_E2E_APP_LOCAL_PORT || '21200';
// Timeout configuration constants (in milliseconds)
const TEST_TIMEOUT_MS = 60 * 1000; // 60 seconds per test
@@ -90,8 +90,8 @@ export default defineConfig({
    {
      // Full UI: Alerts + Dashboards. Not local mode; Alerts enabled;
      command: USE_DEV
-        ? `SERVER_URL=http://localhost:${API_PORT} PORT=${APP_PORT} next dev --webpack`
-        : `SERVER_URL=http://localhost:${API_PORT} PORT=${APP_PORT} yarn build && SERVER_URL=http://localhost:${API_PORT} PORT=${APP_PORT} yarn start`,
+        ? `SERVER_URL=http://localhost:${API_PORT} PORT=${APP_PORT} NEXT_DIST_DIR=.next-e2e next dev --webpack`
+        : `SERVER_URL=http://localhost:${API_PORT} PORT=${APP_PORT} NEXT_DIST_DIR=.next-e2e yarn build && SERVER_URL=http://localhost:${API_PORT} PORT=${APP_PORT} NEXT_DIST_DIR=.next-e2e yarn start`,
      port: parseInt(APP_PORT, 10),
      reuseExistingServer: !process.env.CI,
      timeout: APP_SERVER_STARTUP_TIMEOUT_MS,
@@ -102,8 +102,8 @@ export default defineConfig({
    : {
        // Local mode: Frontend only
        command: USE_DEV
-          ? `NEXT_PUBLIC_IS_LOCAL_MODE=true PORT=${APP_LOCAL_PORT} next dev --webpack`
-          : `NEXT_PUBLIC_IS_LOCAL_MODE=true yarn build && NEXT_PUBLIC_IS_LOCAL_MODE=true PORT=${APP_LOCAL_PORT} yarn start`,
+          ? `NEXT_PUBLIC_IS_LOCAL_MODE=true PORT=${APP_LOCAL_PORT} NEXT_DIST_DIR=.next-e2e next dev --webpack`
+          : `NEXT_PUBLIC_IS_LOCAL_MODE=true NEXT_DIST_DIR=.next-e2e yarn build && NEXT_PUBLIC_IS_LOCAL_MODE=true PORT=${APP_LOCAL_PORT} NEXT_DIST_DIR=.next-e2e yarn start`,
        port: parseInt(APP_LOCAL_PORT, 10),
        reuseExistingServer: !process.env.CI,
        timeout: APP_SERVER_STARTUP_TIMEOUT_MS,


@@ -61,7 +61,7 @@ const env = {
};
// Port configuration from HDX_E2E_* env vars (set by scripts/test-e2e.sh)
-const chPort = env.HDX_E2E_CH_PORT || '18123';
+const chPort = env.HDX_E2E_CH_PORT || '20500';
// Ensure CLICKHOUSE_HOST is set for seed-clickhouse.ts (used by both modes)
env.CLICKHOUSE_HOST = `http://localhost:${chPort}`;


@@ -3,7 +3,7 @@ services:
  db:
    image: mongo:5.0.32-focal
    ports:
-      - ${HDX_E2E_MONGO_PORT:-39999}:27017
+      - ${HDX_E2E_MONGO_PORT:-21100}:27017
    networks:
      - internal
    deploy:
@@ -17,8 +17,8 @@ services:
  ch-server:
    image: clickhouse/clickhouse-server:26.1-alpine
    ports:
-      - ${HDX_E2E_CH_PORT:-18123}:8123 # http api
-      - ${HDX_E2E_CH_NATIVE_PORT:-18223}:9000 # native
+      - ${HDX_E2E_CH_PORT:-20500}:8123 # http api
+      - ${HDX_E2E_CH_NATIVE_PORT:-20600}:9000 # native
    environment:
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1
    volumes:


@@ -36,9 +36,9 @@ const DEFAULT_TEST_USER = {
} as const;
// Port configuration from HDX_E2E_* env vars (set by scripts/test-e2e.sh)
-const API_PORT = process.env.HDX_E2E_API_PORT || '19000';
-const APP_PORT = process.env.HDX_E2E_APP_PORT || '48081';
-const MONGO_PORT = process.env.HDX_E2E_MONGO_PORT || '39999';
+const API_PORT = process.env.HDX_E2E_API_PORT || '21000';
+const APP_PORT = process.env.HDX_E2E_APP_PORT || '21300';
+const MONGO_PORT = process.env.HDX_E2E_MONGO_PORT || '21100';
const API_URL = process.env.E2E_API_URL || `http://localhost:${API_PORT}`;
const APP_URL = process.env.E2E_APP_URL || `http://localhost:${APP_PORT}`;


@@ -18,7 +18,7 @@ interface ClickHouseConfig {
const DEFAULT_CONFIG: ClickHouseConfig = {
  host:
    process.env.CLICKHOUSE_HOST ||
-    `http://localhost:${process.env.HDX_E2E_CH_PORT || '18123'}`,
+    `http://localhost:${process.env.HDX_E2E_CH_PORT || '20500'}`,
  user: process.env.CLICKHOUSE_USER || 'default',
  password: process.env.CLICKHOUSE_PASSWORD || '',
};


@@ -6,7 +6,7 @@
 */
import { Page } from '@playwright/test';
-const API_PORT = process.env.HDX_E2E_API_PORT || '19000';
+const API_PORT = process.env.HDX_E2E_API_PORT || '21000';
const API_URL = process.env.E2E_API_URL || `http://localhost:${API_PORT}`;
/**

scripts/dev-env.sh Executable file

@@ -0,0 +1,153 @@
#!/usr/bin/env bash
# ---------------------------------------------------------------------------
# Dev environment isolation helper
# ---------------------------------------------------------------------------
# Computes a deterministic port offset (HDX_DEV_SLOT, 0-99) from the current
# working directory name so that multiple git worktrees can run the full dev
# stack in parallel without port conflicts.
#
# Usage:
# source scripts/dev-env.sh # export env vars into current shell
# . scripts/dev-env.sh # same thing, POSIX style
#
# Override the slot manually:
# HDX_DEV_SLOT=5 . scripts/dev-env.sh
#
# Port allocation scheme — each service gets a 100-wide band in the 30100-31199
# range, well clear of:
# - CI integration test ports (14320-14419, 18123-18222, 19000-19099, 39999-40098)
# - E2E test ports (20320-21399)
# - Default dev ports (4317-4320, 8000, 8080, 8123, 8888, 9000, 13133, 14318, 27017)
# - OS ephemeral ports (32768+ Linux, 49152+ macOS)
#
# Port mapping (base + slot):
# API server : 30100 + slot (HYPERDX_API_PORT)
# App (Next.js) : 30200 + slot (HYPERDX_APP_PORT)
# OpAMP : 30300 + slot (HYPERDX_OPAMP_PORT)
# MongoDB : 30400 + slot (HDX_DEV_MONGO_PORT)
# ClickHouse HTTP : 30500 + slot (HDX_DEV_CH_HTTP_PORT)
# ClickHouse Native : 30600 + slot (HDX_DEV_CH_NATIVE_PORT)
# OTel health : 30700 + slot (HDX_DEV_OTEL_HEALTH_PORT)
# OTel gRPC : 30800 + slot (HDX_DEV_OTEL_GRPC_PORT)
# OTel HTTP : 30900 + slot (HDX_DEV_OTEL_HTTP_PORT)
# OTel metrics : 31000 + slot (HDX_DEV_OTEL_METRICS_PORT)
# OTel JSON HTTP : 31100 + slot (HDX_DEV_OTEL_JSON_HTTP_PORT)
# ---------------------------------------------------------------------------
# Compute slot from directory name (same algorithm as CI slot in Makefile)
HDX_DEV_SLOT="${HDX_DEV_SLOT:-$(printf '%s' "$(basename "$PWD")" | cksum | awk '{print $1 % 100}')}"
# Git metadata for portal labels
HDX_DEV_BRANCH="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo 'unknown')"
HDX_DEV_WORKTREE="$(basename "$PWD")"
# --- Application ports (30100-30399) ---
HYPERDX_API_PORT=$((30100 + HDX_DEV_SLOT))
HYPERDX_APP_PORT=$((30200 + HDX_DEV_SLOT))
HYPERDX_OPAMP_PORT=$((30300 + HDX_DEV_SLOT))
# --- Docker service ports (30400-31199) ---
HDX_DEV_MONGO_PORT=$((30400 + HDX_DEV_SLOT))
HDX_DEV_CH_HTTP_PORT=$((30500 + HDX_DEV_SLOT))
HDX_DEV_CH_NATIVE_PORT=$((30600 + HDX_DEV_SLOT))
HDX_DEV_OTEL_HEALTH_PORT=$((30700 + HDX_DEV_SLOT))
HDX_DEV_OTEL_GRPC_PORT=$((30800 + HDX_DEV_SLOT))
HDX_DEV_OTEL_HTTP_PORT=$((30900 + HDX_DEV_SLOT))
HDX_DEV_OTEL_METRICS_PORT=$((31000 + HDX_DEV_SLOT))
HDX_DEV_OTEL_JSON_HTTP_PORT=$((31100 + HDX_DEV_SLOT))
# --- Docker Compose project name (unique per slot) ---
HDX_DEV_PROJECT="hdx-dev-${HDX_DEV_SLOT}"
# Export everything
export HDX_DEV_SLOT
export HDX_DEV_BRANCH
export HDX_DEV_WORKTREE
export HYPERDX_API_PORT
export HYPERDX_APP_PORT
export HYPERDX_OPAMP_PORT
export HDX_DEV_MONGO_PORT
export HDX_DEV_CH_HTTP_PORT
export HDX_DEV_CH_NATIVE_PORT
export HDX_DEV_OTEL_HEALTH_PORT
export HDX_DEV_OTEL_GRPC_PORT
export HDX_DEV_OTEL_HTTP_PORT
export HDX_DEV_OTEL_METRICS_PORT
export HDX_DEV_OTEL_JSON_HTTP_PORT
export HDX_DEV_PROJECT
# --- Clean up stale Next.js state from previous sessions ---
# Nuke the entire .next directory to avoid stale webpack bundles, lock files,
# and cached module resolutions after common-utils rebuilds.
rm -rf "${PWD}/packages/app/.next" 2>/dev/null || true
# --- Set up directories for portal discovery + logs ---
HDX_DEV_SLOTS_DIR="${HOME}/.config/hyperdx/dev-slots"
HDX_DEV_LOGS_DIR="${HDX_DEV_SLOTS_DIR}/${HDX_DEV_SLOT}/logs"
mkdir -p "$HDX_DEV_LOGS_DIR"
export HDX_DEV_SLOTS_DIR
export HDX_DEV_LOGS_DIR
cat > "${HDX_DEV_SLOTS_DIR}/${HDX_DEV_SLOT}.json" <<EOF
{
"slot": ${HDX_DEV_SLOT},
"branch": "${HDX_DEV_BRANCH}",
"worktree": "${HDX_DEV_WORKTREE}",
"worktreePath": "${PWD}",
"apiPort": ${HYPERDX_API_PORT},
"appPort": ${HYPERDX_APP_PORT},
"opampPort": ${HYPERDX_OPAMP_PORT},
"mongoPort": ${HDX_DEV_MONGO_PORT},
"chHttpPort": ${HDX_DEV_CH_HTTP_PORT},
"chNativePort": ${HDX_DEV_CH_NATIVE_PORT},
"otelHttpPort": ${HDX_DEV_OTEL_HTTP_PORT},
"otelGrpcPort": ${HDX_DEV_OTEL_GRPC_PORT},
"otelJsonHttpPort": ${HDX_DEV_OTEL_JSON_HTTP_PORT},
"logsDir": "${HDX_DEV_LOGS_DIR}",
"pid": $$,
"startedAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
# --- Start dev portal in background if port 9900 is free ---
# shellcheck source=./ensure-dev-portal.sh
source "${BASH_SOURCE[0]%/*}/ensure-dev-portal.sh"
# Clean up slot file and archive logs on exit
_hdx_cleanup_slot() {
if [ -n "$HDX_PORTAL_PID" ] && kill -0 "$HDX_PORTAL_PID" 2>/dev/null; then
kill "$HDX_PORTAL_PID" 2>/dev/null || true
fi
rm -f "${HDX_DEV_SLOTS_DIR}/${HDX_DEV_SLOT}.json" 2>/dev/null || true
# Archive logs to history instead of deleting
if [ -d "$HDX_DEV_LOGS_DIR" ] && [ -n "$(ls -A "$HDX_DEV_LOGS_DIR" 2>/dev/null)" ]; then
_ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
_hist="${HDX_DEV_SLOTS_DIR}/${HDX_DEV_SLOT}/history/dev-${_ts}"
mkdir -p "$_hist"
mv "$HDX_DEV_LOGS_DIR"/* "$_hist/" 2>/dev/null || true
cat > "$_hist/meta.json" <<METAEOF
{"worktree":"${HDX_DEV_WORKTREE}","branch":"${HDX_DEV_BRANCH}","worktreePath":"${PWD}"}
METAEOF
fi
rm -rf "$HDX_DEV_LOGS_DIR" 2>/dev/null || true
}
trap _hdx_cleanup_slot EXIT
# Print summary
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║ HyperDX Dev Environment — Slot ${HDX_DEV_SLOT}$(printf '%*s' $((27 - ${#HDX_DEV_SLOT})) '')"
echo "╠══════════════════════════════════════════════════════════════╣"
echo "║ Branch: ${HDX_DEV_BRANCH}$(printf '%*s' $((45 - ${#HDX_DEV_BRANCH})) '')"
echo "║ Worktree: ${HDX_DEV_WORKTREE}$(printf '%*s' $((45 - ${#HDX_DEV_WORKTREE})) '')"
echo "╠══════════════════════════════════════════════════════════════╣"
echo "║ App (Next.js) http://localhost:${HYPERDX_APP_PORT}$(printf '%*s' $((22 - ${#HYPERDX_APP_PORT})) '')"
echo "║ API http://localhost:${HYPERDX_API_PORT}$(printf '%*s' $((22 - ${#HYPERDX_API_PORT})) '')"
echo "║ ClickHouse http://localhost:${HDX_DEV_CH_HTTP_PORT}$(printf '%*s' $((22 - ${#HDX_DEV_CH_HTTP_PORT})) '')"
echo "║ MongoDB localhost:${HDX_DEV_MONGO_PORT}$(printf '%*s' $((29 - ${#HDX_DEV_MONGO_PORT})) '')"
echo "║ OTel HTTP http://localhost:${HDX_DEV_OTEL_HTTP_PORT}$(printf '%*s' $((22 - ${#HDX_DEV_OTEL_HTTP_PORT})) '')"
echo "║ OTel gRPC localhost:${HDX_DEV_OTEL_GRPC_PORT}$(printf '%*s' $((29 - ${#HDX_DEV_OTEL_GRPC_PORT})) '')"
echo "║ OpAMP localhost:${HYPERDX_OPAMP_PORT}$(printf '%*s' $((29 - ${#HYPERDX_OPAMP_PORT})) '')"
echo "╠══════════════════════════════════════════════════════════════╣"
echo "║ Portal: http://localhost:${HDX_PORTAL_PORT}$(printf '%*s' $((28 - ${#HDX_PORTAL_PORT})) '')"
echo "╚══════════════════════════════════════════════════════════════╝"
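
The slot derivation above can be exercised standalone. A sketch under the assumption that `feature-x` stands in for a worktree directory name; the slot varies with the name but is always in 0-99 and stable across runs:

```shell
#!/usr/bin/env sh
# Same derivation as dev-env.sh: CRC of the directory name via cksum,
# reduced mod 100, then added to each service's base port.
name="feature-x"   # stand-in for $(basename "$PWD")
slot=$(printf '%s' "$name" | cksum | awk '{print $1 % 100}')
echo "slot=$slot"
echo "api=$((30100 + slot)) app=$((30200 + slot)) mongo=$((30400 + slot))"
```

Because the hash depends only on the directory name, re-sourcing the script in the same worktree always lands on the same ports; two worktrees collide only if their names hash to the same slot, which `HDX_DEV_SLOT=<n>` can override.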

scripts/dev-kill-ports.sh Executable file

@@ -0,0 +1,38 @@
#!/usr/bin/env sh
# ---------------------------------------------------------------------------
# Kill orphaned dev processes on slot-specific ports
# Sourced after dev-env.sh so HYPERDX_API_PORT, HYPERDX_APP_PORT, etc. are set
# ---------------------------------------------------------------------------
PORTS="$HYPERDX_API_PORT $HYPERDX_APP_PORT $HYPERDX_OPAMP_PORT"
killed=0
for port in $PORTS; do
[ -z "$port" ] && continue
pids=$(lsof -ti :"$port" 2>/dev/null)
for pid in $pids; do
echo "Killing process $pid on port $port"
kill "$pid" 2>/dev/null && killed=$((killed + 1))
done
done
# Also kill the dev portal if running
if [ -n "$HDX_PORTAL_PORT" ]; then
pids=$(lsof -ti :"$HDX_PORTAL_PORT" 2>/dev/null)
for pid in $pids; do
echo "Killing dev portal (pid $pid) on port $HDX_PORTAL_PORT"
kill "$pid" 2>/dev/null && killed=$((killed + 1))
done
fi
# Clean up slot file
if [ -n "$HDX_DEV_SLOTS_DIR" ] && [ -n "$HDX_DEV_SLOT" ]; then
rm -f "${HDX_DEV_SLOTS_DIR}/${HDX_DEV_SLOT}.json" 2>/dev/null || true
rm -rf "${HDX_DEV_SLOTS_DIR}/${HDX_DEV_SLOT}" 2>/dev/null || true
fi
if [ "$killed" -gt 0 ]; then
echo "Killed $killed orphaned process(es)"
else
echo "No orphaned dev processes found"
fi
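
The kill loop relies on `lsof -ti`, which prints bare PIDs (one per line) for listeners on a port and nothing when the port is free. A sketch of the same pattern; port 30199 is an arbitrary example assumed to be unused:

```shell
#!/usr/bin/env sh
# List PIDs bound to the port, then kill each one; the counter mirrors
# the "Killed N orphaned process(es)" summary in dev-kill-ports.sh.
port=30199
killed=0
pids=$(lsof -ti :"$port" 2>/dev/null || true)
for pid in $pids; do
  kill "$pid" 2>/dev/null && killed=$((killed + 1))
done
echo "killed=$killed"   # killed=0 when nothing listens on the port
```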


@@ -0,0 +1,857 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>HyperDX Dev Portal</title>
<link
rel="icon"
type="image/svg+xml"
href="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='260' height='260' fill='none' viewBox='0 0 260 260'%3E%3Cdefs%3E%3CclipPath id='a'%3E%3Crect width='260' height='260' fill='%23fff' rx='40'/%3E%3C/clipPath%3E%3C/defs%3E%3Cg clip-path='url(%23a)'%3E%3Cpath fill='%2325E2A5' d='M242.166 65v130l-112.583 65.001L17 195V65L129.583 0l112.583 65Z'/%3E%3Cpath fill='%231E1E1E' d='M157.698 42.893c1.754 1.284 2.573 3.92 1.976 6.361l-15.713 64.274h28.994c1.74 0 3.314 1.302 4.004 3.313.691 2.011.366 4.346-.827 5.941l-69.801 93.342c-1.391 1.86-3.617 2.267-5.371.984-1.753-1.285-2.572-3.921-1.976-6.362l15.713-64.274H85.704c-1.74 0-3.315-1.302-4.005-3.312-.69-2.011-.365-4.346.828-5.942l69.801-93.342c1.39-1.86 3.616-2.267 5.37-.983Z'/%3E%3C/g%3E%3C/svg%3E"
/>
<link rel="stylesheet" href="/styles.css" />
</head>
<body>
<div
id="modal-overlay"
class="modal-overlay"
style="display: none"
onclick="if(event.target===this)closeModal(false)"
>
<div class="modal-box">
<h3 id="modal-title">Confirm</h3>
<p id="modal-message">Are you sure?</p>
<div class="modal-actions">
<button class="modal-cancel" onclick="closeModal(false)">
Cancel
</button>
<button
class="modal-danger"
id="modal-confirm-btn"
onclick="closeModal(true)"
>
Delete
</button>
</div>
</div>
</div>
<div class="layout">
<div class="main-panel">
<div class="header">
<h1>
<svg
class="logo"
width="28"
height="28"
viewBox="0 0 512 512"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<g clip-path="url(#lc)">
<path
d="M256 0L477.703 128V384L256 512L34.2975 384V128L256 0Z"
fill="var(--accent)"
/>
<path
d="M311.365 84.4663C314.818 86.9946 316.431 92.1862 315.256 96.9926L284.313 223.563H341.409C344.836 223.563 347.936 226.127 349.295 230.086C350.655 234.046 350.014 238.644 347.665 241.786L210.211 425.598C207.472 429.26 203.089 430.062 199.635 427.534C196.182 425.005 194.569 419.814 195.744 415.007L226.686 288.437H169.591C166.164 288.437 163.064 285.873 161.705 281.914C160.345 277.954 160.986 273.356 163.335 270.214L300.789 86.4023C303.528 82.7403 307.911 81.938 311.365 84.4663Z"
fill="var(--bg)"
/>
</g>
<defs>
<clipPath id="lc">
<rect width="512" height="512" fill="white" />
</clipPath>
</defs>
</svg>
HyperDX Dev Portal
</h1>
<div class="status">
<div class="dot"></div>
<span id="refresh-status">Auto-refreshing every 3s</span>
</div>
</div>
<div class="tab-bar">
<button class="tab active" id="tab-live" onclick="switchTab('live')">
Live
</button>
<button class="tab" id="tab-history" onclick="switchTab('history')">
History
</button>
</div>
<div id="error-banner" class="error-banner"></div>
<div id="content"></div>
<div id="history-content" style="display: none"></div>
</div>
<div id="log-panel" class="log-panel">
<div class="log-panel-header">
<h3>
<span id="log-title">Logs</span>
<span id="log-slot-label" class="slot-label"></span>
</h3>
<div class="log-panel-actions">
<span
id="log-stream-badge"
class="log-streaming-badge"
style="display: none"
>
<span class="stream-dot"></span> Live
</span>
<button
onclick="toggleAutoScroll()"
id="autoscroll-btn"
title="Toggle auto-scroll"
>
Auto-scroll
</button>
<button onclick="clearLogPanel()" title="Clear log view">
Clear
</button>
<button onclick="closeLogPanel()" class="close-btn" title="Close">
&times;
</button>
</div>
</div>
<div id="log-content" class="log-content"></div>
</div>
</div>
<script>
const contentEl = document.getElementById('content');
const historyEl = document.getElementById('history-content');
const errorEl = document.getElementById('error-banner');
const logPanel = document.getElementById('log-panel');
const logContent = document.getElementById('log-content');
const logTitle = document.getElementById('log-title');
const logSlotLabel = document.getElementById('log-slot-label');
const logStreamBadge = document.getElementById('log-stream-badge');
let activeTab = 'live';
let currentLogSlot = null;
let currentLogService = null;
let currentLogEnvType = null;
let currentEventSource = null;
let autoScroll = true;
const MAX_LOG_LINES = 5000;
let logBuffer = [];
let rafPending = false;
// --- ANSI escape code to HTML converter ---
const ANSI_COLORS = {
30: '#666',
31: '#f85149',
32: '#3fb950',
33: '#d29922',
34: '#58a6ff',
35: '#bc8cff',
36: '#76e3ea',
37: '#e6edf3',
39: null, // default
90: '#7d8590',
91: '#ff7b72',
92: '#56d364',
93: '#e3b341',
94: '#79c0ff',
95: '#d2a8ff',
96: '#a5d6ff',
97: '#ffffff',
};
function ansiToHtml(text) {
let html = '';
let i = 0;
let openSpans = 0;
while (i < text.length) {
// Match ESC[ ... m sequences
if (text[i] === '\x1b' && text[i + 1] === '[') {
const end = text.indexOf('m', i + 2);
if (end === -1) {
i++;
continue;
}
const codes = text
.substring(i + 2, end)
.split(';')
.map(Number);
i = end + 1;
for (const code of codes) {
if (code === 0 || code === 22 || code === 39) {
// Reset / unbold / default color — close open spans
while (openSpans > 0) {
html += '</span>';
openSpans--;
}
} else if (code === 1) {
html += '<span style="font-weight:bold">';
openSpans++;
} else if (code === 2) {
html += '<span style="opacity:0.6">';
openSpans++;
} else if (code === 3) {
html += '<span style="font-style:italic">';
openSpans++;
} else if (ANSI_COLORS[code] !== undefined) {
const color = ANSI_COLORS[code];
if (color) {
html += `<span style="color:${color}">`;
openSpans++;
}
} else if (code === 38) {
// 256-color or RGB — parse 38;2;r;g;b
const rgbIdx = codes.indexOf(38);
if (codes[rgbIdx + 1] === 2 && codes.length >= rgbIdx + 5) {
const r = codes[rgbIdx + 2],
g = codes[rgbIdx + 3],
b = codes[rgbIdx + 4];
html += `<span style="color:rgb(${r},${g},${b})">`;
openSpans++;
}
break; // consumed remaining codes
}
}
} else {
// Escape HTML special chars
const ch = text[i];
if (ch === '<') html += '&lt;';
else if (ch === '>') html += '&gt;';
else if (ch === '&') html += '&amp;';
else html += ch;
i++;
}
}
while (openSpans > 0) {
html += '</span>';
openSpans--;
}
return html;
}
function flushLogBuffer() {
rafPending = false;
if (logBuffer.length === 0) return;
const fragment = document.createDocumentFragment();
for (const text of logBuffer) {
const div = document.createElement('div');
div.className = 'log-line';
div.innerHTML = ansiToHtml(text);
fragment.appendChild(div);
}
logBuffer.length = 0;
logContent.appendChild(fragment);
// Prune oldest lines if over cap
const overflow = logContent.children.length - MAX_LOG_LINES;
if (overflow > 0) {
for (let i = 0; i < overflow; i++) {
logContent.removeChild(logContent.firstChild);
}
}
if (autoScroll) {
logContent.scrollTop = logContent.scrollHeight;
}
}
function appendLogLine(text) {
logBuffer.push(text);
if (!rafPending) {
rafPending = true;
requestAnimationFrame(flushLogBuffer);
}
}
function serviceDisplayName(name) {
const map = {
app: 'App (Next.js)',
api: 'API Server',
clickhouse: 'ClickHouse',
mongodb: 'MongoDB',
'otel-collector': 'OTel Collector',
'otel-collector-json': 'OTel JSON',
alerts: 'Alerts Task',
'common-utils': 'Common Utils',
'e2e-runner': 'E2E Runner',
};
return map[name] || name;
}
function envTypeLabel(envType) {
const map = { dev: 'Dev', e2e: 'E2E', int: 'Integration' };
return map[envType] || envType;
}
// --- Log panel ---
function openLogPanel(slot, service, envType) {
envType = envType || 'dev';
// Close existing stream
if (currentEventSource) {
currentEventSource.close();
currentEventSource = null;
}
currentLogSlot = slot;
currentLogService = service;
currentLogEnvType = envType;
logTitle.textContent = serviceDisplayName(service);
logSlotLabel.textContent = `${envTypeLabel(envType)} \u00b7 slot ${slot}`;
logContent.innerHTML = '';
appendLogLine('Loading logs...');
logPanel.classList.add('open');
logStreamBadge.style.display = 'none';
// Highlight active row
document
.querySelectorAll('.services-table tr.active')
.forEach(el => el.classList.remove('active'));
const activeRow = document.querySelector(
`tr[data-slot="${slot}"][data-service="${service}"][data-env="${envType}"]`,
);
if (activeRow) activeRow.classList.add('active');
// Start SSE stream
const eventSource = new EventSource(
`/api/logs/${envType}/${slot}/${encodeURIComponent(service)}?stream=1`,
);
currentEventSource = eventSource;
let firstMessage = true;
eventSource.onmessage = event => {
if (firstMessage) {
logContent.innerHTML = '';
logStreamBadge.style.display = 'inline-flex';
firstMessage = false;
}
appendLogLine(event.data);
};
eventSource.addEventListener('close', () => {
logStreamBadge.style.display = 'none';
appendLogLine('--- stream ended ---');
});
eventSource.onerror = () => {
logStreamBadge.style.display = 'none';
if (firstMessage) {
logContent.innerHTML = '';
appendLogLine('No logs available for this service.');
}
eventSource.close();
};
}
function closeLogPanel() {
if (currentEventSource) {
currentEventSource.close();
currentEventSource = null;
}
logBuffer.length = 0;
logPanel.classList.remove('open');
currentLogSlot = null;
currentLogService = null;
currentLogEnvType = null;
document
.querySelectorAll('.services-table tr.active')
.forEach(el => el.classList.remove('active'));
}
function clearLogPanel() {
logContent.innerHTML = '';
logBuffer.length = 0;
}
function toggleAutoScroll() {
autoScroll = !autoScroll;
const btn = document.getElementById('autoscroll-btn');
btn.style.color = autoScroll ? 'var(--green)' : 'var(--text-muted)';
btn.style.borderColor = autoScroll ? 'var(--green)' : 'var(--border)';
if (autoScroll) {
logContent.scrollTop = logContent.scrollHeight;
}
}
// --- Dashboard rendering ---
function renderStacks(stacks) {
if (stacks.length === 0) {
return `
<div class="empty-state">
<h2>No stacks running</h2>
<p>Start an environment from a worktree:</p>
<br>
<code>make dev</code> &nbsp; <code>make dev-e2e</code> &nbsp; <code>make dev-int</code>
</div>
`;
}
// Group stacks by worktree name so all envs for the same
// worktree appear in a single card.
const groups = new Map();
for (const stack of stacks) {
const key = stack.worktree || 'unknown';
if (!groups.has(key)) groups.set(key, []);
groups.get(key).push(stack);
}
return `<div class="stacks">${[...groups.values()].map(renderWorktreeCard).join('')}</div>`;
}
function renderWorktreeCard(stacks) {
// Use the dev stack for the header since it has the richest metadata.
const header = stacks.find(s => s.envType === 'dev') || stacks[0];
const devStack = stacks.find(s => s.envType === 'dev');
const appService =
devStack &&
devStack.services.find(s => s.name === 'app' && s.status === 'up');
const appUrl = appService ? appService.url : null;
// Flatten all services from every env into one table.
// Insert a separator row before each env group.
const envOrder = ['dev', 'e2e', 'int'];
// indexOf returns -1 (never null/undefined), so ?? would not catch
// unknown env types; map them to the end of the order explicitly.
const envRank = t => {
const i = envOrder.indexOf(t);
return i === -1 ? 9 : i;
};
const sorted = [...stacks].sort(
(a, b) => envRank(a.envType) - envRank(b.envType),
);
let rows = '';
for (const stack of sorted) {
const envType = stack.envType || 'dev';
// Env group separator row
rows += `
<tr class="env-separator">
<td colspan="4">
<span class="env-badge ${envType}">${envTypeLabel(envType)}</span>
</td>
</tr>`;
rows += stack.services
.map(svc => renderService(stack.slot, svc, envType))
.join('');
}
return `
<div class="stack-card">
<div class="stack-header">
<div>
<div style="display:flex;align-items:center">
<span class="branch">${escapeHtml(header.branch)}</span>
</div>
<div class="worktree">${escapeHtml(header.worktree)}</div>
</div>
<div class="stack-actions">
${appUrl ? `<a href="${appUrl}" target="_blank" class="open-btn">Open App</a>` : ''}
</div>
</div>
<table class="services-table">
<thead>
<tr>
<th>Service</th>
<th>Status</th>
<th>Port</th>
<th></th>
</tr>
</thead>
<tbody>
${rows}
</tbody>
</table>
</div>
`;
}
function renderService(slot, svc, envType) {
envType = envType || 'dev';
const portCell = svc.port
? `<span class="port-plain">:${svc.port}</span>`
: `<span class="port-plain">-</span>`;
const isActive =
currentLogSlot == slot &&
currentLogService === svc.name &&
currentLogEnvType === envType;
return `
<tr class="clickable ${isActive ? 'active' : ''}"
data-slot="${slot}" data-service="${svc.name}" data-env="${envType}"
onclick="openLogPanel(${slot}, '${svc.name}', '${envType}')">
<td>
<div class="service-name">
${serviceDisplayName(svc.name)}
<span class="type-badge ${svc.type}">${svc.type}</span>
</div>
</td>
<td>
<div class="status-indicator">
<div class="status-dot ${svc.status}"></div>
${svc.status === 'up' ? 'Running' : 'Stopped'}
</div>
</td>
<td>${portCell}</td>
<td><button class="log-btn" onclick="event.stopPropagation(); openLogPanel(${slot}, '${svc.name}', '${envType}')">Logs</button></td>
</tr>
`;
}
function escapeHtml(str) {
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;');
}
// --- Tab switching ---
function switchTab(tab) {
activeTab = tab;
document
.getElementById('tab-live')
.classList.toggle('active', tab === 'live');
document
.getElementById('tab-history')
.classList.toggle('active', tab === 'history');
if (tab === 'live') {
contentEl.style.display = '';
historyEl.style.display = 'none';
refresh();
} else {
contentEl.style.display = 'none';
historyEl.style.display = '';
refreshHistory();
}
}
// --- History rendering ---
function timeAgo(date) {
const seconds = Math.floor((Date.now() - date.getTime()) / 1000);
if (seconds < 0) return 'just now';
if (seconds < 60) return 'just now';
if (seconds < 3600) return `${Math.floor(seconds / 60)}m ago`;
if (seconds < 86400) return `${Math.floor(seconds / 3600)}h ago`;
return `${Math.floor(seconds / 86400)}d ago`;
}
function formatSize(bytes) {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1048576) return `${(bytes / 1024).toFixed(1)} KB`;
return `${(bytes / 1048576).toFixed(1)} MB`;
}
let historySearchQuery = '';
let _allHistoryEntries = [];
function onHistorySearch(el) {
historySearchQuery = el.value;
historyExpanded.clear();
const cursorPos = el.selectionStart;
// Re-render from cached entries without re-fetching
historyEl.innerHTML = renderHistory(_allHistoryEntries);
// Restore focus and cursor position after re-render
const input = historyEl.querySelector('.history-search input');
if (input) {
input.focus();
input.setSelectionRange(cursorPos, cursorPos);
}
}
async function refreshHistory() {
try {
const res = await fetch('/api/history');
if (!res.ok) throw new Error(`HTTP ${res.status}`);
_allHistoryEntries = await res.json();
historyEl.innerHTML = renderHistory(_allHistoryEntries);
errorEl.style.display = 'none';
} catch (err) {
errorEl.textContent = `Failed to fetch history: ${err.message}`;
errorEl.style.display = 'block';
}
}
// Simple fuzzy match: all query terms (space-separated) must appear
// somewhere in the searchable text (case-insensitive).
function fuzzyMatch(query, texts) {
if (!query) return true;
const haystack = texts.join(' ').toLowerCase();
const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
return terms.every(term => haystack.includes(term));
}
function filterHistoryEntries(entries, query) {
if (!query) return entries;
return entries.filter(entry =>
fuzzyMatch(query, [entry.worktree || '', entry.branch || '']),
);
}
function renderHistory(entries) {
const searchHtml = `
<div class="history-search">
<span class="history-search-icon">\u{1F50D}</span>
<input type="text"
placeholder="Search worktree or branch..."
value="${escapeHtml(historySearchQuery)}"
oninput="onHistorySearch(this)" />
</div>
`;
if (entries.length === 0) {
return `
<div class="empty-state">
<h2>No past runs</h2>
<p>Logs from completed dev, E2E, and integration runs will appear here.</p>
</div>
`;
}
const filtered = filterHistoryEntries(entries, historySearchQuery);
// Group filtered entries by worktree
const groups = new Map();
for (const entry of filtered) {
const key = entry.worktree || 'unknown';
if (!groups.has(key)) groups.set(key, []);
groups.get(key).push(entry);
}
const noResults = filtered.length === 0;
return `
<div class="history-header">
<h2>Past Runs</h2>
<button class="clear-all-btn" onclick="clearAllHistory()">Clear All</button>
</div>
${searchHtml}
${
noResults
? `<div class="empty-state"><p>No runs matching "${escapeHtml(historySearchQuery)}"</p></div>`
: `<div class="stacks">
${[...groups.entries()].map(([worktree, group]) => renderHistoryWorktreeCard(worktree, group)).join('')}
</div>`
}
`;
}
// Track which history cards are expanded (by worktree name).
// All cards are collapsed by default.
const historyExpanded = new Set();
function toggleHistoryCard(worktree) {
if (historyExpanded.has(worktree)) {
historyExpanded.delete(worktree);
} else {
historyExpanded.add(worktree);
}
refreshHistory();
}
function renderHistoryWorktreeCard(worktree, entries) {
// Use the first entry for branch info
const branch = entries[0].branch || 'unknown';
const collapsed = !historyExpanded.has(worktree);
const chevron = collapsed ? '\u25b6' : '\u25bc';
const count = entries.length;
return `
<div class="stack-card">
<div class="stack-header">
<div>
<div style="display:flex;align-items:center">
<span class="branch">${escapeHtml(branch)}</span>
<span style="color:var(--text-muted);font-size:12px;margin-left:8px">${count} run${count !== 1 ? 's' : ''}</span>
</div>
<div class="worktree">${escapeHtml(worktree)}</div>
</div>
<div class="stack-actions">
<button class="history-toggle-btn" onclick="toggleHistoryCard('${escapeHtml(worktree)}')">${chevron}</button>
</div>
</div>
<div class="history-card-body${collapsed ? ' collapsed' : ''}" style="max-height:${collapsed ? 0 : entries.length * 200}px">
<div class="history-list" style="padding:0">
${entries.map(renderHistoryEntry).join('')}
</div>
</div>
</div>
`;
}
// Note: entry.totalSize is the aggregate size of all files in the run; the
// history API does not expose per-file sizes, so each file row shows the total.
function renderHistoryEntry(entry) {
const relTime = timeAgo(new Date(entry.timestamp));
return `
<div class="history-entry">
<div class="history-entry-header">
<div class="history-meta">
<span class="env-badge ${entry.envType}">${envTypeLabel(entry.envType)}</span>
<span class="history-time">${relTime}</span>
<span class="history-ts">${escapeHtml(entry.timestamp)}</span>
</div>
<button class="history-delete-btn" onclick="event.stopPropagation(); deleteHistoryEntry(${entry.slot}, '${escapeHtml(entry.dir)}')">Delete</button>
</div>
<div class="file-list">
${entry.files
.map(
f => `
<div class="file-item"
onclick="viewHistoryLog(${entry.slot}, '${escapeHtml(entry.dir)}', '${escapeHtml(f)}', '${entry.envType}')">
<span class="file-name">${escapeHtml(f)}</span>
<span class="file-size">${formatSize(entry.totalSize)}</span>
</div>
`,
)
.join('')}
</div>
</div>
`;
}
async function viewHistoryLog(slot, dir, file, envType) {
// Close any existing SSE stream
if (currentEventSource) {
currentEventSource.close();
currentEventSource = null;
}
currentLogSlot = null;
currentLogService = null;
currentLogEnvType = null;
logTitle.textContent = file;
logSlotLabel.textContent = `${envTypeLabel(envType)} \u00b7 ${timeAgo(new Date(dir.replace(/^(dev|e2e|int)-/, '')))}`;
logContent.innerHTML = '';
appendLogLine('Loading archived log...');
logPanel.classList.add('open');
logStreamBadge.style.display = 'none';
// Highlight active file item
document
.querySelectorAll('.file-item.active')
.forEach(el => el.classList.remove('active'));
try {
const res = await fetch(
`/api/history/${slot}/${encodeURIComponent(dir)}/${encodeURIComponent(file)}`,
);
const text = await res.text();
logContent.innerHTML = '';
for (const line of text.split('\n')) {
appendLogLine(line);
}
} catch {
logContent.innerHTML = '';
appendLogLine('Failed to load log file.');
}
}
async function deleteHistoryEntry(slot, dir) {
const ok = await showModal(
'Delete run?',
'This will permanently remove the log files for this run.',
'Delete',
);
if (!ok) return;
await fetch(`/api/history/${slot}/${encodeURIComponent(dir)}`, {
method: 'DELETE',
});
closeLogPanel();
refreshHistory();
}
async function clearAllHistory() {
const ok = await showModal(
'Clear all history?',
'This will permanently delete all past run logs. This cannot be undone.',
'Delete All',
);
if (!ok) return;
try {
const res = await fetch('/api/history');
const entries = await res.json();
await Promise.all(
entries.map(e =>
fetch(`/api/history/${e.slot}/${encodeURIComponent(e.dir)}`, {
method: 'DELETE',
}),
),
);
} catch {
// ignore errors
}
closeLogPanel();
refreshHistory();
}
// --- Custom confirm modal ---
let _modalResolve = null;
function showModal(title, message, confirmLabel) {
return new Promise(resolve => {
_modalResolve = resolve;
document.getElementById('modal-title').textContent = title;
document.getElementById('modal-message').textContent = message;
document.getElementById('modal-confirm-btn').textContent =
confirmLabel || 'Confirm';
const overlay = document.getElementById('modal-overlay');
overlay.style.display = 'flex';
requestAnimationFrame(() => overlay.classList.add('visible'));
});
}
function closeModal(result) {
const overlay = document.getElementById('modal-overlay');
overlay.classList.remove('visible');
setTimeout(() => {
overlay.style.display = 'none';
}, 150);
if (_modalResolve) {
_modalResolve(result);
_modalResolve = null;
}
}
// --- Live dashboard ---
async function refresh() {
try {
const res = await fetch('/api/stacks');
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const stacks = await res.json();
contentEl.innerHTML = renderStacks(stacks);
errorEl.style.display = 'none';
} catch (err) {
errorEl.textContent = `Failed to fetch stacks: ${err.message}`;
errorEl.style.display = 'block';
}
}
// Initial load + periodic refresh (only refresh active tab)
refresh();
setInterval(() => {
if (activeTab === 'live') refresh();
}, 3000);
// Sync the auto-scroll button state: toggling twice leaves the flag
// unchanged while forcing the button label/class to render.
toggleAutoScroll();
toggleAutoScroll();
// Keybindings
document.addEventListener('keydown', e => {
if (e.key === 'Escape') {
if (_modalResolve) {
closeModal(false);
} else {
closeLogPanel();
}
}
});
</script>
</body>
</html>

scripts/dev-portal/server.js (new executable file, 970 lines)

#!/usr/bin/env node
// ---------------------------------------------------------------------------
// HyperDX Dev Portal — Centralized dashboard for all local environments
// ---------------------------------------------------------------------------
// Discovers running environments by:
// 1. Querying Docker for containers belonging to known Compose projects:
// - Dev stacks (project: hdx-dev-<slot>)
// - E2E test stacks (project: e2e-<slot>)
// - CI int stacks (project: int-<slot>)
// 2. Reading slot files from ~/.config/hyperdx/dev-slots/*.json
// (for non-Docker local dev services like API, App, alerts)
//
// Usage:
// node scripts/dev-portal/server.js # default port 9900
// HDX_PORTAL_PORT=9901 node scripts/dev-portal/server.js
//
// Zero external dependencies — uses only Node.js built-ins.
// ---------------------------------------------------------------------------
const http = require('node:http');
const { execSync, spawn } = require('node:child_process');
const fs = require('node:fs');
const path = require('node:path');
const net = require('node:net');
const url = require('node:url');
const PORT = parseInt(process.env.HDX_PORTAL_PORT || '9900', 10);
const SLOTS_DIR = path.join(
process.env.HOME || process.env.USERPROFILE || '/tmp',
'.config',
'hyperdx',
'dev-slots',
);
// ---------------------------------------------------------------------------
// Docker discovery
// ---------------------------------------------------------------------------
// Recognised Docker Compose project prefixes and their environment type.
// Dev containers also carry hdx.dev.* labels; E2E and CI containers only
// carry the standard com.docker.compose.* labels.
const PROJECT_PREFIX_TO_ENV = {
'hdx-dev-': 'dev',
'e2e-': 'e2e',
'int-': 'int',
};
function discoverDockerContainers() {
try {
// Fetch ALL running containers — we filter by project prefix in JS so
// that a single `docker ps` call covers dev, E2E and CI environments.
const raw = execSync('docker ps --format "{{json .}}"', {
encoding: 'utf-8',
timeout: 5000,
});
return raw
.trim()
.split('\n')
.filter(Boolean)
.map(line => {
try {
return JSON.parse(line);
} catch {
return null;
}
})
.filter(Boolean)
.filter(c => {
// Keep only containers whose project matches a known prefix
const labels = parseContainerLabels(c);
const project = labels['com.docker.compose.project'] || '';
return Object.keys(PROJECT_PREFIX_TO_ENV).some(prefix =>
project.startsWith(prefix),
);
});
} catch {
return [];
}
}
function parseContainerLabels(container) {
// Docker --format "{{json .}}" gives us Labels as a comma-separated string
const labels = {};
const labelsStr = container.Labels || '';
// Labels look like: "hdx.dev.slot=89,hdx.dev.service=clickhouse,com.docker.compose.service=ch-server,..."
labelsStr.split(',').forEach(pair => {
const eqIdx = pair.indexOf('=');
if (eqIdx > 0) {
labels[pair.substring(0, eqIdx)] = pair.substring(eqIdx + 1);
}
});
return labels;
}
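The label parsing above can be checked in isolation; this sketch mirrors `parseContainerLabels` but takes the raw label string directly. Note that, like the original, it would mis-split a label whose value itself contains a comma:

```javascript
// Standalone mirror of parseContainerLabels: Docker's `{{json .}}` output
// flattens labels into a single comma-separated "key=value" string.
// Known limitation (shared with the original): values containing commas
// are split incorrectly.
function parseLabels(labelsStr) {
  const labels = {};
  for (const pair of (labelsStr || '').split(',')) {
    const eqIdx = pair.indexOf('=');
    if (eqIdx > 0) {
      labels[pair.substring(0, eqIdx)] = pair.substring(eqIdx + 1);
    }
  }
  return labels;
}

const parsed = parseLabels(
  'hdx.dev.slot=89,com.docker.compose.project=hdx-dev-89,com.docker.compose.service=ch-server',
);
console.log(parsed['com.docker.compose.project']); // "hdx-dev-89"
```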
function parsePortMappings(portsStr) {
// Ports look like: "0.0.0.0:30589->8123/tcp, 0.0.0.0:30689->9000/tcp"
const mappings = [];
if (!portsStr) return mappings;
const parts = portsStr.split(',').map(s => s.trim());
for (const part of parts) {
const match = part.match(/(?:[\d.]+:)?(\d+)->(\d+)\/(\w+)/);
if (match) {
mappings.push({
hostPort: parseInt(match[1], 10),
containerPort: parseInt(match[2], 10),
protocol: match[3],
});
}
}
return mappings;
}
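The port-string regex above handles the host-prefix form Docker emits; a standalone mirror of `parsePortMappings` makes the expected shape concrete:

```javascript
// Standalone mirror of parsePortMappings: Docker reports mappings like
// "0.0.0.0:30589->8123/tcp, 0.0.0.0:30689->9000/tcp". The optional
// "[\d.]+:" prefix strips the bind address when present.
function parsePorts(portsStr) {
  const mappings = [];
  if (!portsStr) return mappings;
  for (const part of portsStr.split(',').map(s => s.trim())) {
    const match = part.match(/(?:[\d.]+:)?(\d+)->(\d+)\/(\w+)/);
    if (match) {
      mappings.push({
        hostPort: parseInt(match[1], 10),
        containerPort: parseInt(match[2], 10),
        protocol: match[3],
      });
    }
  }
  return mappings;
}

const m = parsePorts('0.0.0.0:30589->8123/tcp, 0.0.0.0:30689->9000/tcp');
console.log(m[0].hostPort, m[1].containerPort); // 30589 9000
```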
// ---------------------------------------------------------------------------
// Slot file discovery (for API/App local processes)
// ---------------------------------------------------------------------------
function discoverSlotFiles() {
const slots = {};
try {
if (!fs.existsSync(SLOTS_DIR)) return slots;
const files = fs.readdirSync(SLOTS_DIR).filter(f => f.endsWith('.json'));
for (const file of files) {
try {
const content = fs.readFileSync(path.join(SLOTS_DIR, file), 'utf-8');
const data = JSON.parse(content);
if (data.slot !== undefined) {
// Check if the PID is still alive
if (data.pid) {
try {
process.kill(data.pid, 0); // signal 0 = check existence
data.processAlive = true;
} catch {
data.processAlive = false;
}
}
slots[data.slot] = data;
}
} catch {
// skip malformed files
}
}
} catch {
// slots dir doesn't exist yet
}
return slots;
}
// ---------------------------------------------------------------------------
// TCP port probe (check if a port is listening)
// ---------------------------------------------------------------------------
function probePort(port) {
return new Promise(resolve => {
const socket = new net.Socket();
socket.setTimeout(300);
socket.once('connect', () => {
socket.destroy();
resolve(true);
});
socket.once('timeout', () => {
socket.destroy();
resolve(false);
});
socket.once('error', () => {
socket.destroy();
resolve(false);
});
socket.connect(port, '127.0.0.1');
});
}
// ---------------------------------------------------------------------------
// Derive environment type and slot from a Docker Compose project name
// ---------------------------------------------------------------------------
function parseProject(projectName) {
for (const [prefix, envType] of Object.entries(PROJECT_PREFIX_TO_ENV)) {
if (projectName.startsWith(prefix)) {
const slot = projectName.slice(prefix.length);
return { envType, slot };
}
}
return null;
}
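Since the compose project name is the only thing E2E/CI containers carry that encodes their environment, the prefix parsing is worth exercising on its own. A minimal mirror of `parseProject`:

```javascript
// Standalone mirror of parseProject: the compose project name encodes both
// the environment type (via prefix) and the slot number (the remainder).
const PREFIXES = { 'hdx-dev-': 'dev', 'e2e-': 'e2e', 'int-': 'int' };
function parseProject(projectName) {
  for (const [prefix, envType] of Object.entries(PREFIXES)) {
    if (projectName.startsWith(prefix)) {
      return { envType, slot: projectName.slice(prefix.length) };
    }
  }
  return null; // unrelated Compose projects are ignored
}

console.log(parseProject('hdx-dev-42')); // { envType: 'dev', slot: '42' }
console.log(parseProject('postgres')); // null
```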
// Canonical service name from a compose service name.
// Dev containers carry hdx.dev.service; E2E/CI containers only have the
// compose service name (ch-server, db, otel-collector, …).
const COMPOSE_SERVICE_ALIASES = {
'ch-server': 'clickhouse',
db: 'mongodb',
};
function canonicalServiceName(labels) {
if (labels['hdx.dev.service']) return labels['hdx.dev.service'];
const composeName = labels['com.docker.compose.service'] || '';
return COMPOSE_SERVICE_ALIASES[composeName] || composeName;
}
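The two-tier naming fallback above (explicit dev label first, then compose-name alias) can be sketched standalone:

```javascript
// Standalone mirror of canonicalServiceName: dev containers carry an
// explicit hdx.dev.service label; E2E/CI containers only have the compose
// service name, which is mapped through an alias table.
const ALIASES = { 'ch-server': 'clickhouse', db: 'mongodb' };
function canonicalName(labels) {
  if (labels['hdx.dev.service']) return labels['hdx.dev.service'];
  const composeName = labels['com.docker.compose.service'] || '';
  return ALIASES[composeName] || composeName;
}

console.log(canonicalName({ 'com.docker.compose.service': 'ch-server' })); // "clickhouse"
console.log(canonicalName({ 'hdx.dev.service': 'clickhouse' })); // "clickhouse"
```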
// Resolve the git repository root for a directory. Returns the absolute path
// or null if not inside a git repo. Results are cached in gitRootCache.
const gitRootCache = new Map();
function resolveGitRoot(dir) {
if (!dir) return null;
if (gitRootCache.has(dir)) return gitRootCache.get(dir);
let root = null;
try {
root =
execSync('git rev-parse --show-toplevel', {
encoding: 'utf-8',
timeout: 3000,
cwd: dir,
stdio: ['ignore', 'pipe', 'ignore'],
}).trim() || null;
} catch {
// not a git repo
}
gitRootCache.set(dir, root);
return root;
}
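The caching here has one subtlety worth noting: failures are cached as `null`, so a directory that is not a git repo is only probed once. The pattern can be sketched generically (`memoizeNullable` and the lookup function are hypothetical names for illustration, not part of the portal):

```javascript
// Generic sketch of the resolveGitRoot caching pattern: cache the result
// even when the lookup fails (null), so the expensive call runs at most
// once per input.
function memoizeNullable(fn) {
  const cache = new Map();
  return arg => {
    if (cache.has(arg)) return cache.get(arg);
    let result = null;
    try {
      result = fn(arg);
    } catch {
      // cache the failure as null too
    }
    cache.set(arg, result);
    return result;
  };
}

let calls = 0;
const lookup = memoizeNullable(dir => {
  calls++;
  if (dir === '/bad') throw new Error('not a repo');
  return `${dir}/.git`;
});
lookup('/bad');
lookup('/bad'); // served from cache — the thrown failure is not retried
console.log(calls); // 1
```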
// Resolve git branch for a working directory. Cached per request cycle via
// the branchCache map passed in from the caller.
function resolveGitBranch(workingDir, branchCache) {
if (!workingDir) return 'unknown';
if (branchCache.has(workingDir)) return branchCache.get(workingDir);
let branch = 'unknown';
try {
branch =
execSync('git rev-parse --abbrev-ref HEAD', {
encoding: 'utf-8',
timeout: 3000,
cwd: workingDir,
stdio: ['ignore', 'pipe', 'ignore'],
}).trim() || 'unknown';
} catch {
// not a git repo or git not available
}
branchCache.set(workingDir, branch);
return branch;
}
// ---------------------------------------------------------------------------
// Aggregate all data into a unified view
// ---------------------------------------------------------------------------
async function buildDashboardData() {
const containers = discoverDockerContainers();
const slotFiles = discoverSlotFiles();
// Group Docker containers by a unique key: `${envType}-${slot}`
const stackMap = {};
const branchCache = new Map();
function ensureStack(key, slot, envType) {
if (!stackMap[key]) {
stackMap[key] = {
slot: parseInt(slot, 10),
envType,
branch: 'unknown',
worktree: 'unknown',
worktreePath: '',
services: [],
};
}
return stackMap[key];
}
for (const container of containers) {
const labels = parseContainerLabels(container);
const project = labels['com.docker.compose.project'] || '';
const parsed = parseProject(project);
if (!parsed) continue;
const { envType, slot } = parsed;
const key = `${envType}-${slot}`;
const stack = ensureStack(key, slot, envType);
// Dev containers carry hdx.dev.* labels with branch/worktree info.
// E2E and CI containers only have standard compose labels, so we
// derive worktree from the working_dir and resolve the git branch.
if (envType === 'dev') {
if (labels['hdx.dev.branch']) stack.branch = labels['hdx.dev.branch'];
if (labels['hdx.dev.worktree'])
stack.worktree = labels['hdx.dev.worktree'];
}
// For all envTypes: fall back to compose working_dir when missing.
// The working_dir may be a subdirectory (e.g. packages/app/tests/e2e
// for E2E containers), so resolve up to the git repo root.
const workingDir = labels['com.docker.compose.project.working_dir'] || '';
if (workingDir && stack.worktree === 'unknown') {
const repoRoot = resolveGitRoot(workingDir) || workingDir;
stack.worktree = path.basename(repoRoot);
stack.worktreePath = repoRoot;
}
if (workingDir && stack.branch === 'unknown') {
stack.branch = resolveGitBranch(workingDir, branchCache);
}
const ports = parsePortMappings(container.Ports);
const mainPort = labels['hdx.dev.port']
? parseInt(labels['hdx.dev.port'], 10)
: ports.length > 0
? ports[0].hostPort
: null;
stack.services.push({
name: canonicalServiceName(labels),
type: 'docker',
status: container.State === 'running' ? 'up' : 'down',
port: mainPort,
url: labels['hdx.dev.url'] || null,
ports,
containerId: container.ID,
uptime: container.RunningFor || '',
});
}
// Merge slot file data (API/App local processes) — only applies to dev stacks
for (const [slotStr, data] of Object.entries(slotFiles)) {
const slot = slotStr; // Object.entries keys are already strings
const key = `dev-${slot}`;
const stack = ensureStack(key, slot, 'dev');
// Enrich with branch/worktree from slot file if Docker labels are generic
if (stack.branch === 'unknown' && data.branch) {
stack.branch = data.branch;
}
if (stack.worktree === 'unknown' && data.worktree) {
stack.worktree = data.worktree;
}
stack.worktreePath = data.worktreePath || '';
// Add API and App as services (probe their ports)
const apiUp = await probePort(data.apiPort);
const appUp = await probePort(data.appPort);
// Only add if not already present from Docker
const hasApi = stack.services.some(s => s.name === 'api');
const hasApp = stack.services.some(s => s.name === 'app');
if (!hasApi) {
stack.services.unshift({
name: 'api',
type: 'local',
status: apiUp ? 'up' : 'down',
port: data.apiPort,
url: `http://localhost:${data.apiPort}`,
ports: [],
uptime: data.startedAt || '',
});
}
if (!hasApp) {
stack.services.unshift({
name: 'app',
type: 'local',
status: appUp ? 'up' : 'down',
port: data.appPort,
url: `http://localhost:${data.appPort}`,
ports: [],
uptime: data.startedAt || '',
});
}
// Add alerts and common-utils as local services (detected by log file existence)
const logsDir = data.logsDir || path.join(SLOTS_DIR, String(slot), 'logs');
const localOnlyServices = [
{ name: 'alerts', logFile: 'alerts.log' },
{ name: 'common-utils', logFile: 'common-utils.log' },
];
for (const { name, logFile } of localOnlyServices) {
if (!stack.services.some(s => s.name === name)) {
const logExists = fs.existsSync(path.join(logsDir, logFile));
stack.services.push({
name,
type: 'local',
status: logExists ? 'up' : 'down',
port: null,
url: null,
ports: [],
uptime: data.startedAt || '',
});
}
}
}
// Probe known local service ports for E2E and CI stacks.
// These are processes started by Playwright (E2E) or the Makefile (CI),
// not Docker containers, so they don't appear in the container list.
const ENV_LOCAL_SERVICES = {
e2e: [
{ name: 'e2e-runner', basePort: null }, // meta-service, detected by log file
{ name: 'api', basePort: 21000 },
{ name: 'app', basePort: 21300 },
],
int: [{ name: 'api', basePort: 19000 }],
};
for (const stack of Object.values(stackMap)) {
const localServices = ENV_LOCAL_SERVICES[stack.envType];
if (!localServices) continue;
for (const { name, basePort } of localServices) {
if (stack.services.some(s => s.name === name)) continue;
if (basePort) {
const port = basePort + stack.slot;
const up = await probePort(port);
stack.services.unshift({
name,
type: 'local',
status: up ? 'up' : 'down',
port,
url: `http://localhost:${port}`,
ports: [],
uptime: '',
});
} else {
// Meta-service (e.g. e2e-runner) — detect by log file existence
const logsDir = path.join(
SLOTS_DIR,
String(stack.slot),
`logs-${stack.envType}`,
);
const logFile = ENV_LOG_FILES[stack.envType]?.[name];
const logExists = logFile && fs.existsSync(path.join(logsDir, logFile));
stack.services.unshift({
name,
type: 'local',
status: logExists ? 'up' : 'down',
port: null,
url: null,
ports: [],
uptime: '',
});
}
}
}
// Sort: dev stacks first, then e2e, then int. Within each type sort by slot.
const envOrder = { dev: 0, e2e: 1, int: 2 };
const serviceOrder = [
'e2e-runner',
'app',
'api',
'alerts',
'common-utils',
'clickhouse',
'mongodb',
'otel-collector',
'otel-collector-json',
];
return Object.values(stackMap)
.sort((a, b) => {
const ea = envOrder[a.envType] ?? 9;
const eb = envOrder[b.envType] ?? 9;
return ea !== eb ? ea - eb : a.slot - b.slot;
})
.map(stack => ({
...stack,
services: stack.services.sort((a, b) => {
const ai = serviceOrder.indexOf(a.name);
const bi = serviceOrder.indexOf(b.name);
return (ai === -1 ? 999 : ai) - (bi === -1 ? 999 : bi);
}),
}));
}
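The stack ordering at the end of `buildDashboardData` can be verified in isolation; this mirror shows that unknown environment types sink to the bottom and ties break by slot:

```javascript
// Standalone mirror of the stack sort: by environment type (dev, e2e, int),
// then numerically by slot; unrecognised types rank last (?? 9).
const envOrder = { dev: 0, e2e: 1, int: 2 };
function sortStacks(stacks) {
  return [...stacks].sort((a, b) => {
    const ea = envOrder[a.envType] ?? 9;
    const eb = envOrder[b.envType] ?? 9;
    return ea !== eb ? ea - eb : a.slot - b.slot;
  });
}

const sorted = sortStacks([
  { envType: 'int', slot: 3 },
  { envType: 'dev', slot: 12 },
  { envType: 'dev', slot: 4 },
]);
console.log(sorted.map(s => `${s.envType}-${s.slot}`).join(',')); // dev-4,dev-12,int-3
```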
// ---------------------------------------------------------------------------
// Log retrieval
// ---------------------------------------------------------------------------
// Map service names to log file names, keyed by envType.
// dev: each service has its own log file in <slot>/logs/
// e2e: single e2e.log captures the full Playwright run in <slot>/logs-e2e/
// int: api-int.log / common-utils-int.log / ci-int.log in <slot>/logs-int/
const ENV_LOG_FILES = {
dev: {
api: 'api.log',
app: 'app.log',
alerts: 'alerts.log',
'common-utils': 'common-utils.log',
},
e2e: {
'e2e-runner': 'e2e.log',
api: 'e2e.log', // API output is captured inside the Playwright log
app: 'e2e.log', // App output is captured inside the Playwright log
},
int: {
api: 'api-int.log',
'common-utils': 'common-utils-int.log',
},
};
// Backwards-compatible alias used only by dev stacks
const LOCAL_LOG_FILES = ENV_LOG_FILES.dev;
// Log subdirectory per envType
const ENV_LOG_DIRS = {
dev: 'logs',
e2e: 'logs-e2e',
int: 'logs-int',
};
// Map canonical service names to Docker Compose service names
const DOCKER_SERVICE_NAMES = {
clickhouse: 'ch-server',
mongodb: 'db',
'otel-collector': 'otel-collector',
'otel-collector-json': 'otel-collector-json',
};
// Map envType -> { project prefix, compose file relative to repo root }
const ENV_COMPOSE_CONFIG = {
dev: { prefix: 'hdx-dev-', composeFile: 'docker-compose.dev.yml' },
e2e: {
prefix: 'e2e-',
composeFile: 'packages/app/tests/e2e/docker-compose.yml',
},
int: { prefix: 'int-', composeFile: 'docker-compose.ci.yml' },
};
function getLocalLogs(slot, service, tail, envType = 'dev') {
const logFiles = ENV_LOG_FILES[envType] || ENV_LOG_FILES.dev;
const logFile = logFiles[service];
if (!logFile) return null;
const logSubdir = ENV_LOG_DIRS[envType] || 'logs';
const logPath = path.join(SLOTS_DIR, String(slot), logSubdir, logFile);
try {
if (!fs.existsSync(logPath)) return null;
const content = fs.readFileSync(logPath, 'utf-8');
if (tail > 0) {
const lines = content.split('\n');
return lines.slice(-tail).join('\n');
}
return content;
} catch {
return null;
}
}
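The tail behavior of `getLocalLogs` reduces to a small pure function, mirrored here: a positive `tail` keeps the last N lines, anything else returns the full content:

```javascript
// Standalone mirror of the tail logic in getLocalLogs: slice(-tail) keeps
// the last N lines; a tail larger than the file returns everything.
function tailLines(content, tail) {
  if (tail > 0) {
    return content.split('\n').slice(-tail).join('\n');
  }
  return content;
}

console.log(tailLines('a\nb\nc\nd', 2)); // "c\nd"
```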
function getDockerLogs(slot, service, tail, envType = 'dev') {
const composeService = DOCKER_SERVICE_NAMES[service];
if (!composeService) return null;
const config = ENV_COMPOSE_CONFIG[envType] || ENV_COMPOSE_CONFIG.dev;
const project = `${config.prefix}${slot}`;
try {
const logs = execSync(
`docker compose -p "${project}" -f "${config.composeFile}" logs --no-color --tail ${tail} "${composeService}"`,
{ encoding: 'utf-8', timeout: 5000, cwd: process.cwd() },
);
return logs;
} catch {
// Fallback: find container by project + compose service and use docker logs
try {
const containerId = execSync(
`docker ps -q --filter "label=com.docker.compose.project=${project}" --filter "label=com.docker.compose.service=${composeService}"`,
{ encoding: 'utf-8', timeout: 3000 },
).trim();
if (!containerId) return null;
return execSync(`docker logs --tail ${tail} "${containerId}"`, {
encoding: 'utf-8',
timeout: 5000,
});
} catch {
return null;
}
}
}
function getLogs(slot, service, tail = 100, envType = 'dev') {
// Try local log file first (all env types may have log files now)
const local = getLocalLogs(slot, service, tail, envType);
if (local !== null) return local;
return getDockerLogs(slot, service, tail, envType) || '';
}
/**
* Stream logs via Server-Sent Events (SSE).
* For Docker: spawns `docker logs --follow`.
* For local: tails the log file with periodic polling.
*/
function streamLogs(slot, service, req, res, envType = 'dev') {
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
'Access-Control-Allow-Origin': '*',
});
const sendEvent = data => {
// SSE format: each line of data prefixed with "data: "
const lines = data.split('\n');
for (const line of lines) {
res.write(`data: ${line}\n`);
}
res.write('\n');
};
// Try Docker streaming first
const composeService = DOCKER_SERVICE_NAMES[service];
if (composeService) {
const config = ENV_COMPOSE_CONFIG[envType] || ENV_COMPOSE_CONFIG.dev;
const project = `${config.prefix}${slot}`;
const child = spawn(
'docker',
[
'compose',
'-p',
project,
'-f',
config.composeFile,
'logs',
'--no-color',
'--follow',
'--tail',
'50',
composeService,
],
{ cwd: process.cwd() },
);
child.stdout.on('data', chunk => sendEvent(chunk.toString()));
child.stderr.on('data', chunk => sendEvent(chunk.toString()));
child.on('close', () => {
res.write('event: close\ndata: stream ended\n\n');
res.end();
});
req.on('close', () => child.kill());
return;
}
// For local services: poll the log file
const logFiles = ENV_LOG_FILES[envType] || ENV_LOG_FILES.dev;
const logFile = logFiles[service];
if (logFile) {
const logSubdir = ENV_LOG_DIRS[envType] || 'logs';
const logPath = path.join(SLOTS_DIR, String(slot), logSubdir, logFile);
let lastSize = 0;
// Send initial tail
try {
if (fs.existsSync(logPath)) {
const stat = fs.statSync(logPath);
// Read last 8KB for initial payload
const readStart = Math.max(0, stat.size - 8192);
const fd = fs.openSync(logPath, 'r');
const buf = Buffer.alloc(stat.size - readStart);
fs.readSync(fd, buf, 0, buf.length, readStart);
fs.closeSync(fd);
sendEvent(buf.toString('utf-8'));
lastSize = stat.size;
}
} catch {
// file may not exist yet
}
// Poll for new content
const interval = setInterval(() => {
try {
if (!fs.existsSync(logPath)) return;
const stat = fs.statSync(logPath);
if (stat.size > lastSize) {
const fd = fs.openSync(logPath, 'r');
const buf = Buffer.alloc(stat.size - lastSize);
fs.readSync(fd, buf, 0, buf.length, lastSize);
fs.closeSync(fd);
sendEvent(buf.toString('utf-8'));
lastSize = stat.size;
}
} catch {
// ignore read errors
}
}, 1000);
req.on('close', () => clearInterval(interval));
return;
}
// Unknown service
sendEvent('Unknown service: ' + service);
res.end();
}
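The `sendEvent` helper above implements standard SSE framing: each line of a multi-line payload gets its own `data: ` prefix, and a blank line terminates the event. As a pure function (hypothetical `toSseFrame`, extracted for testability):

```javascript
// Pure-function mirror of sendEvent's SSE framing: per the SSE wire format,
// every payload line is prefixed with "data: " and a trailing blank line
// ends the event, so multi-line log chunks arrive as a single event.
function toSseFrame(data) {
  return data.split('\n').map(line => `data: ${line}\n`).join('') + '\n';
}

console.log(JSON.stringify(toSseFrame('line1\nline2')));
// "data: line1\ndata: line2\n\n"
```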
// ---------------------------------------------------------------------------
// Log history — archived runs stored in <slot>/history/<envType>-<ISO ts>/
// ---------------------------------------------------------------------------
const HISTORY_DIR_RE = /^(dev|e2e|int)-(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)$/;
function discoverHistory() {
const results = [];
try {
if (!fs.existsSync(SLOTS_DIR)) return results;
// Compute this portal's own slot so we know when process.cwd() is a
// valid fallback (only for the slot that matches this worktree).
let localSlot = null;
try {
const cwd = process.cwd();
const base = path.basename(cwd);
const cksum = [...base].reduce((s, c) => s + c.charCodeAt(0), 0);
localSlot = cksum % 100;
} catch {
// ignore
}
for (const slotEntry of fs.readdirSync(SLOTS_DIR)) {
const histDir = path.join(SLOTS_DIR, slotEntry, 'history');
if (!fs.existsSync(histDir) || !fs.statSync(histDir).isDirectory())
continue;
const slot = parseInt(slotEntry, 10);
if (isNaN(slot)) continue;
// Resolve slot-level worktree/branch from the JSON file (if still alive)
let slotWorktree = null;
let slotBranch = null;
const slotFile = path.join(SLOTS_DIR, `${slot}.json`);
try {
if (fs.existsSync(slotFile)) {
const data = JSON.parse(fs.readFileSync(slotFile, 'utf-8'));
slotWorktree = data.worktree || null;
slotBranch = data.branch || null;
if (!slotWorktree && data.worktreePath) {
slotWorktree = path.basename(data.worktreePath);
}
}
} catch {
// ignore
}
// Collect entries for this slot, reading meta.json where available
const slotEntries = [];
let metaWorktree = null;
let metaBranch = null;
for (const runDir of fs.readdirSync(histDir)) {
const match = runDir.match(HISTORY_DIR_RE);
if (!match) continue;
const runPath = path.join(histDir, runDir);
if (!fs.statSync(runPath).isDirectory()) continue;
const files = fs.readdirSync(runPath).filter(f => f.endsWith('.log'));
if (files.length === 0) continue;
let totalSize = 0;
for (const f of files) {
try {
totalSize += fs.statSync(path.join(runPath, f)).size;
} catch {
// ignore
}
}
// Read per-run meta.json
let runWorktree = null;
let runBranch = null;
const metaPath = path.join(runPath, 'meta.json');
try {
if (fs.existsSync(metaPath)) {
const meta = JSON.parse(fs.readFileSync(metaPath, 'utf-8'));
runWorktree = meta.worktree || null;
runBranch = meta.branch || null;
// Remember the first valid meta as a fallback for siblings
if (runWorktree && !metaWorktree) {
metaWorktree = runWorktree;
metaBranch = runBranch;
}
}
} catch {
// ignore parse errors
}
slotEntries.push({
slot,
envType: match[1],
timestamp: match[2],
dir: runDir,
files,
totalSize,
worktree: runWorktree,
branch: runBranch,
});
}
// For entries without meta.json, resolve the worktree using this
// priority: 1) sibling meta.json, 2) slot JSON file, 3) process.cwd()
// (only if this is the local slot).
let fallbackWorktree = metaWorktree || slotWorktree || null;
let fallbackBranch = metaBranch || slotBranch || null;
if (!fallbackWorktree && slot === localSlot) {
const cwd = process.cwd();
const repoRoot = resolveGitRoot(cwd);
if (repoRoot) {
fallbackWorktree = path.basename(repoRoot);
fallbackBranch = resolveGitBranch(cwd, new Map());
}
}
for (const entry of slotEntries) {
if (!entry.worktree) {
entry.worktree = fallbackWorktree || `slot-${slot}`;
}
if (!entry.branch) {
entry.branch = fallbackBranch || 'unknown';
}
results.push(entry);
}
}
} catch {
// slots dir doesn't exist or isn't readable
}
// Sort newest first
results.sort((a, b) => (b.timestamp > a.timestamp ? 1 : -1));
return results;
}
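The `localSlot` computation inside `discoverHistory` is a simple checksum of the worktree basename (sum of char codes, mod 100) — presumably the same scheme the shell scripts use to pick a deterministic slot per worktree, though those scripts are not shown here. Extracted as a function:

```javascript
// Standalone mirror of the localSlot computation in discoverHistory(): the
// slot is the sum of the char codes of the worktree's basename, mod 100,
// giving a deterministic value in 0-99 per directory name.
function slotForBasename(base) {
  return [...base].reduce((s, c) => s + c.charCodeAt(0), 0) % 100;
}

console.log(slotForBasename('hyperdx')); // 72 — always 0-99, stable per name
```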
function getHistoryLog(slot, dir, file) {
// Validate directory and file names to prevent path traversal
if (!HISTORY_DIR_RE.test(dir)) return null;
if (file.includes('/') || file.includes('\\') || file.includes('..')) return null;
const logPath = path.join(SLOTS_DIR, String(slot), 'history', dir, file);
try {
if (!fs.existsSync(logPath)) return null;
return fs.readFileSync(logPath, 'utf-8');
} catch {
return null;
}
}
function deleteHistoryEntry(slot, dir) {
if (!HISTORY_DIR_RE.test(dir)) return false;
const dirPath = path.join(SLOTS_DIR, String(slot), 'history', dir);
try {
if (!fs.existsSync(dirPath)) return false;
fs.rmSync(dirPath, { recursive: true, force: true });
// Clean up empty parent directories
const histDir = path.join(SLOTS_DIR, String(slot), 'history');
try {
if (fs.existsSync(histDir) && fs.readdirSync(histDir).length === 0) {
fs.rmdirSync(histDir);
}
} catch {
// ignore
}
return true;
} catch {
return false;
}
}
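Both history handlers gate filesystem access on the same validation: the run directory must match the archived-run naming scheme, and the file name may not smuggle in path separators or `..`. A standalone mirror of that guard (`isSafeHistoryPath` is a hypothetical name, extracted for testability):

```javascript
// Standalone mirror of the traversal guards in getHistoryLog /
// deleteHistoryEntry: the directory must look like "<envType>-<ISO ts>"
// and the file name may not contain separators or "..".
const HISTORY_DIR_RE = /^(dev|e2e|int)-(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)$/;
function isSafeHistoryPath(dir, file) {
  if (!HISTORY_DIR_RE.test(dir)) return false;
  if (file.includes('/') || file.includes('..')) return false;
  return true;
}

console.log(isSafeHistoryPath('e2e-2024-01-05T12:30:00Z', 'e2e.log')); // true
console.log(isSafeHistoryPath('../../etc', 'passwd')); // false
```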
// ---------------------------------------------------------------------------
// HTML template
// ---------------------------------------------------------------------------
function renderDashboardHtml() {
return fs.readFileSync(path.join(__dirname, 'index.html'), 'utf-8');
}
// ---------------------------------------------------------------------------
// HTTP server
// ---------------------------------------------------------------------------
const server = http.createServer(async (req, res) => {
const parsed = url.parse(req.url, true);
const pathname = parsed.pathname;
if (pathname === '/api/stacks') {
const data = await buildDashboardData();
res.writeHead(200, {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
});
res.end(JSON.stringify(data));
} else if (pathname.match(/^\/api\/logs\/([a-z][a-z0-9]*)\/(\d+)\/(.+)$/)) {
// New route: /api/logs/:envType/:slot/:service
const match = pathname.match(
/^\/api\/logs\/([a-z][a-z0-9]*)\/(\d+)\/(.+)$/,
);
const envType = match[1];
const slot = match[2];
const service = decodeURIComponent(match[3]);
const tail = parseInt(parsed.query.tail || '200', 10);
if (parsed.query.stream === '1') {
streamLogs(slot, service, req, res, envType);
} else {
const logs = getLogs(slot, service, tail, envType);
res.writeHead(200, {
'Content-Type': 'text/plain',
'Access-Control-Allow-Origin': '*',
});
res.end(logs);
}
} else if (pathname.match(/^\/api\/logs\/(\d+)\/(.+)$/)) {
// Legacy route: /api/logs/:slot/:service (assumes dev)
const match = pathname.match(/^\/api\/logs\/(\d+)\/(.+)$/);
const slot = match[1];
const service = decodeURIComponent(match[2]);
const tail = parseInt(parsed.query.tail || '200', 10);
if (parsed.query.stream === '1') {
streamLogs(slot, service, req, res, 'dev');
} else {
const logs = getLogs(slot, service, tail, 'dev');
res.writeHead(200, {
'Content-Type': 'text/plain',
'Access-Control-Allow-Origin': '*',
});
res.end(logs);
}
} else if (pathname === '/api/history' && req.method === 'GET') {
const data = discoverHistory();
res.writeHead(200, {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
});
res.end(JSON.stringify(data));
} else if (
pathname.match(/^\/api\/history\/(\d+)\/([^/]+)\/(.+)$/) &&
req.method === 'GET'
) {
const match = pathname.match(/^\/api\/history\/(\d+)\/([^/]+)\/(.+)$/);
const slot = match[1];
const dir = decodeURIComponent(match[2]);
const file = decodeURIComponent(match[3]);
const content = getHistoryLog(slot, dir, file);
if (content !== null) {
res.writeHead(200, {
'Content-Type': 'text/plain',
'Access-Control-Allow-Origin': '*',
});
res.end(content);
} else {
res.writeHead(404, { 'Access-Control-Allow-Origin': '*' });
res.end('Not found');
}
} else if (
pathname.match(/^\/api\/history\/(\d+)\/([^/]+)$/) &&
req.method === 'DELETE'
) {
const match = pathname.match(/^\/api\/history\/(\d+)\/([^/]+)$/);
const slot = match[1];
const dir = decodeURIComponent(match[2]);
const ok = deleteHistoryEntry(slot, dir);
res.writeHead(ok ? 200 : 404, {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
});
res.end(JSON.stringify({ ok }));
} else if (pathname === '/styles.css') {
res.writeHead(200, { 'Content-Type': 'text/css' });
res.end(fs.readFileSync(path.join(__dirname, 'styles.css'), 'utf-8'));
} else if (pathname === '/' || pathname === '/index.html') {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(renderDashboardHtml());
} else {
res.writeHead(404);
res.end('Not found');
}
});
server.listen(PORT, () => {
console.log(`\n HyperDX Dev Portal running at http://localhost:${PORT}\n`);
console.log(
' Discovering dev stacks via Docker labels + ~/.config/hyperdx/dev-slots/',
);
console.log(' Press Ctrl+C to stop\n');
});
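The history routes above rely on regex captures to split the request path into slot, run directory, and log file name, URL-decoding the latter two. A minimal standalone sketch of that decomposition (the route pattern is copied from the handler above; the sample path is hypothetical):

```javascript
// Same pattern as the /api/history/:slot/:dir/:file handler above.
const HISTORY_ROUTE = /^\/api\/history\/(\d+)\/([^/]+)\/(.+)$/;

// Hypothetical sample path for illustration.
const sample = '/api/history/7/e2e-2026-01-01T00%3A00%3A00Z/e2e.log';
const match = sample.match(HISTORY_ROUTE);

const slot = match[1]; // '7'
const dir = decodeURIComponent(match[2]); // 'e2e-2026-01-01T00:00:00Z'
const file = decodeURIComponent(match[3]); // 'e2e.log'

console.log(slot, dir, file);
```

The `[^/]+` capture keeps the run directory to a single path segment, so an encoded `:` in a timestamp survives but a literal `/` cannot escape into a parent directory.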


@@ -0,0 +1,731 @@
:root {
/* HyperDX dark mode palette — from _tokens.scss + mantineTheme.ts */
--bg: #101113; /* dark-9: color-bg-body */
--card-bg: #1a1b1e; /* dark-7: color-bg-muted */
--card-surface: #25262b; /* dark-6: color-bg-field */
--border: #2c2e33; /* dark-5: color-border */
--border-emphasis: #373a40; /* dark-4: color-border-emphasis */
--text: #c1c2c5; /* dark-0: color-text */
--text-muted: #909296; /* dark-2: color-text-secondary */
--accent: #25e2a5; /* green-4: color-bg-brand / color-text-brand */
--accent-hover: #a0fad5; /* green-2 */
--green: #25e2a5; /* green-4: color-text-success */
--red: #ff725c; /* chart-error */
--yellow: #efb118; /* chart-warning */
--orange: #db6d28;
--log-bg: #141517; /* dark-8 */
--hover: #25262b; /* dark-6: color-bg-hover */
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial,
sans-serif;
background: var(--bg);
color: var(--text);
min-height: 100vh;
}
.layout {
display: flex;
height: 100vh;
}
.main-panel {
flex: 1;
overflow-y: auto;
padding: 24px;
min-width: 0;
}
.log-panel {
width: 0;
background: var(--bg);
border-left: 1px solid var(--border);
display: flex;
flex-direction: column;
transition: width 0.2s ease;
overflow: hidden;
}
.log-panel.open {
width: 50%;
min-width: 400px;
}
.log-panel-header {
padding: 12px 16px;
border-bottom: 1px solid var(--border);
display: flex;
align-items: center;
justify-content: space-between;
flex-shrink: 0;
}
.log-panel-header h3 {
font-size: 14px;
font-weight: 600;
display: flex;
align-items: center;
gap: 8px;
}
.log-panel-header .slot-label {
font-size: 11px;
color: var(--text-muted);
font-weight: 400;
}
.log-panel-actions {
display: flex;
gap: 8px;
align-items: center;
}
.log-panel-actions button {
background: none;
border: 1px solid var(--border);
color: var(--text-muted);
padding: 4px 10px;
border-radius: 4px;
font-size: 12px;
cursor: pointer;
transition: all 0.15s;
}
.log-panel-actions button:hover {
color: var(--text);
border-color: var(--text-muted);
}
.log-panel-actions .close-btn {
font-size: 18px;
line-height: 1;
padding: 2px 6px;
border: none;
}
.log-content {
flex: 1;
overflow-y: auto;
padding: 12px 16px;
font-family: 'SF Mono', 'Fira Code', 'Cascadia Code', monospace;
font-size: 12px;
line-height: 1.5;
white-space: pre-wrap;
word-break: break-all;
background: var(--bg);
color: #c9d1d9;
}
.log-content .log-line {
padding: 0 4px;
}
.log-streaming-badge {
display: inline-flex;
align-items: center;
gap: 4px;
font-size: 11px;
color: var(--green);
}
.log-streaming-badge .stream-dot {
width: 6px;
height: 6px;
border-radius: 50%;
background: var(--green);
animation: pulse 1.5s infinite;
}
.header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 24px;
padding-bottom: 16px;
border-bottom: 1px solid var(--border);
}
.header h1 {
font-size: 24px;
font-weight: 600;
display: flex;
align-items: center;
gap: 10px;
}
.header .logo {
width: 28px;
height: 28px;
flex-shrink: 0;
}
.header .status {
font-size: 13px;
color: var(--text-muted);
display: flex;
align-items: center;
gap: 6px;
}
.header .status .dot {
width: 8px;
height: 8px;
border-radius: 50%;
background: var(--green);
animation: pulse 2s infinite;
}
@keyframes pulse {
0%,
100% {
opacity: 1;
}
50% {
opacity: 0.5;
}
}
.empty-state {
text-align: center;
padding: 80px 20px;
color: var(--text-muted);
}
.empty-state h2 {
font-size: 20px;
margin-bottom: 12px;
color: var(--text);
}
.empty-state code {
background: var(--card-bg);
padding: 4px 10px;
border-radius: 6px;
border: 1px solid var(--border);
font-size: 14px;
}
.stacks {
display: grid;
gap: 16px;
grid-template-columns: 1fr;
}
.stack-card {
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 12px;
overflow: hidden;
}
.stack-header {
padding: 16px 20px;
border-bottom: 1px solid var(--border);
display: flex;
align-items: center;
justify-content: space-between;
}
.env-badge {
font-size: 10px;
font-weight: 700;
padding: 2px 7px;
border-radius: 10px;
text-transform: uppercase;
letter-spacing: 0.04em;
}
.env-badge.dev {
background: rgba(37, 226, 165, 0.15);
color: var(--accent);
}
.env-badge.e2e {
background: rgba(88, 166, 255, 0.15);
color: #58a6ff;
}
.env-badge.int {
background: rgba(239, 177, 24, 0.15);
color: var(--yellow);
}
.services-table tr.env-separator td {
padding: 8px 20px 4px;
border-bottom: none;
background: var(--card-surface);
}
.services-table tr.env-separator + tr td {
border-top: none;
}
.stack-header .branch {
font-size: 15px;
font-weight: 600;
font-family: 'SF Mono', 'Fira Code', monospace;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.stack-header .worktree {
font-size: 12px;
color: var(--text-muted);
margin-top: 2px;
}
.stack-header .stack-actions {
display: flex;
gap: 8px;
}
.stack-header .open-btn {
background: var(--accent);
color: var(--bg);
border: none;
padding: 6px 14px;
border-radius: 6px;
font-size: 13px;
font-weight: 600;
cursor: pointer;
text-decoration: none;
transition: opacity 0.15s;
}
.stack-header .open-btn:hover {
opacity: 0.85;
}
.services-table {
width: 100%;
border-collapse: collapse;
}
.services-table th {
text-align: left;
padding: 8px 20px;
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--text-muted);
border-bottom: 1px solid var(--border);
}
.services-table td {
padding: 10px 20px;
font-size: 14px;
border-bottom: 1px solid rgba(48, 54, 61, 0.5);
}
.services-table tr:last-child td {
border-bottom: none;
}
.services-table tr.clickable {
cursor: pointer;
transition: background 0.1s;
}
.services-table tr.clickable:hover {
background: rgba(88, 166, 255, 0.05);
}
.services-table tr.active {
background: rgba(88, 166, 255, 0.1);
}
.service-name {
font-weight: 500;
display: flex;
align-items: center;
gap: 8px;
}
.service-name .type-badge {
font-size: 10px;
padding: 1px 5px;
border-radius: 4px;
font-weight: 600;
text-transform: uppercase;
}
.type-badge.docker {
background: rgba(37, 226, 165, 0.12);
color: var(--accent);
}
.type-badge.local {
background: rgba(239, 177, 24, 0.12);
color: var(--yellow);
}
.log-btn {
background: none;
border: 1px solid var(--border);
color: var(--text-muted);
padding: 2px 8px;
border-radius: 4px;
font-size: 11px;
cursor: pointer;
transition: all 0.15s;
}
.log-btn:hover {
color: var(--text);
border-color: var(--text-muted);
}
.status-indicator {
display: flex;
align-items: center;
gap: 6px;
font-size: 13px;
}
.status-dot {
width: 8px;
height: 8px;
border-radius: 50%;
}
.status-dot.up {
background: var(--green);
}
.status-dot.down {
background: var(--red);
}
.port-link {
color: var(--accent);
text-decoration: none;
font-family: 'SF Mono', 'Fira Code', monospace;
font-size: 13px;
}
.port-link:hover {
text-decoration: underline;
}
.port-plain {
color: var(--text-muted);
font-family: 'SF Mono', 'Fira Code', monospace;
font-size: 13px;
}
.error-banner {
background: rgba(248, 81, 73, 0.1);
border: 1px solid var(--red);
color: var(--red);
padding: 12px 16px;
border-radius: 8px;
margin-bottom: 16px;
font-size: 14px;
display: none;
}
/* --- Tabs --- */
.tab-bar {
display: flex;
gap: 0;
margin-bottom: 20px;
border-bottom: 1px solid var(--border);
}
.tab {
background: none;
border: none;
border-bottom: 2px solid transparent;
color: var(--text-muted);
padding: 8px 16px;
font-size: 14px;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
}
.tab:hover {
color: var(--text);
}
.tab.active {
color: var(--accent);
border-bottom-color: var(--accent);
}
/* --- History --- */
.history-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 16px;
}
.history-header h2 {
font-size: 18px;
font-weight: 600;
}
.clear-all-btn {
background: none;
border: 1px solid var(--red);
color: var(--red);
padding: 5px 12px;
border-radius: 6px;
font-size: 12px;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
}
.clear-all-btn:hover {
background: rgba(248, 81, 73, 0.1);
}
.history-list {
display: flex;
flex-direction: column;
gap: 8px;
}
.history-toggle-btn {
background: none;
border: none;
color: var(--text-muted);
cursor: pointer;
font-size: 18px;
line-height: 1;
padding: 2px 6px;
border-radius: 4px;
transition: all 0.15s;
user-select: none;
}
.history-toggle-btn:hover {
color: var(--text);
background: rgba(255, 255, 255, 0.05);
}
.history-card-body {
overflow: hidden;
transition: max-height 0.2s ease;
}
.history-card-body.collapsed {
max-height: 0 !important;
}
.history-entry {
border-top: 1px solid var(--border);
}
.history-entry:first-child {
border-top: none;
}
.history-entry-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 12px 20px;
background: var(--card-surface);
border-bottom: 1px solid rgba(48, 54, 61, 0.5);
}
.history-entry-header .history-meta {
display: flex;
align-items: center;
gap: 10px;
}
.history-time {
font-size: 13px;
color: var(--text);
font-weight: 500;
}
.history-ts {
font-size: 11px;
color: var(--text-muted);
font-family: 'SF Mono', 'Fira Code', monospace;
}
.history-delete-btn {
background: none;
border: 1px solid var(--border);
color: var(--text-muted);
padding: 3px 10px;
border-radius: 4px;
font-size: 11px;
cursor: pointer;
transition: all 0.15s;
}
.history-delete-btn:hover {
color: var(--red);
border-color: var(--red);
}
.file-list {
padding: 4px 0;
}
.file-item {
display: flex;
align-items: center;
justify-content: space-between;
padding: 8px 20px;
cursor: pointer;
transition: background 0.1s;
font-size: 13px;
}
.file-item:hover {
background: rgba(88, 166, 255, 0.05);
}
.file-item.active {
background: rgba(88, 166, 255, 0.1);
}
.file-name {
font-family: 'SF Mono', 'Fira Code', monospace;
color: var(--text);
}
.file-size {
font-size: 11px;
color: var(--text-muted);
font-family: 'SF Mono', 'Fira Code', monospace;
}
/* --- History search --- */
.history-search {
position: relative;
margin-bottom: 16px;
}
.history-search input {
width: 100%;
padding: 9px 14px 9px 36px;
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 8px;
color: var(--text);
font-size: 14px;
font-family: inherit;
outline: none;
transition: border-color 0.15s;
box-sizing: border-box;
}
.history-search input:focus {
border-color: var(--accent);
}
.history-search input::placeholder {
color: var(--text-muted);
}
.history-search-icon {
position: absolute;
left: 12px;
top: 50%;
transform: translateY(-50%);
color: var(--text-muted);
font-size: 14px;
pointer-events: none;
}
.search-match {
background: rgba(37, 226, 165, 0.2);
border-radius: 2px;
}
/* --- Confirm modal --- */
.modal-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.6);
display: flex;
align-items: center;
justify-content: center;
z-index: 100;
opacity: 0;
transition: opacity 0.15s;
}
.modal-overlay.visible {
opacity: 1;
}
.modal-box {
background: var(--card-bg);
border: 1px solid var(--border-emphasis);
border-radius: 12px;
padding: 24px;
max-width: 380px;
width: 90%;
transform: scale(0.95);
transition: transform 0.15s;
}
.modal-overlay.visible .modal-box {
transform: scale(1);
}
.modal-box h3 {
font-size: 16px;
font-weight: 600;
margin-bottom: 8px;
}
.modal-box p {
font-size: 14px;
color: var(--text-muted);
margin-bottom: 20px;
line-height: 1.5;
}
.modal-actions {
display: flex;
justify-content: flex-end;
gap: 8px;
}
.modal-actions button {
padding: 7px 16px;
border-radius: 6px;
font-size: 13px;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
}
.modal-cancel {
background: none;
border: 1px solid var(--border);
color: var(--text);
}
.modal-cancel:hover {
border-color: var(--text-muted);
}
.modal-danger {
background: var(--red);
border: 1px solid var(--red);
color: #fff;
}
.modal-danger:hover {
opacity: 0.85;
}

scripts/ensure-dev-portal.sh Executable file

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
# ---------------------------------------------------------------------------
# Ensure the HyperDX Dev Portal is running on port 9900 (or HDX_PORTAL_PORT).
#
# Works when sourced OR executed directly:
# source scripts/ensure-dev-portal.sh # sourced from another bash script
# bash scripts/ensure-dev-portal.sh # executed from Makefile
#
# When sourced, HDX_PORTAL_PID is set in the caller's environment so the
# caller can kill the portal on exit if desired.
# ---------------------------------------------------------------------------
HDX_PORTAL_PORT="${HDX_PORTAL_PORT:-9900}"
HDX_PORTAL_PID=""
if ! (echo >/dev/tcp/127.0.0.1/"$HDX_PORTAL_PORT") 2>/dev/null; then
# Resolve script directory — works for both source and direct execution
_script_dir="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
_portal_script="${_script_dir}/dev-portal/server.js"
if [ -f "$_portal_script" ]; then
HDX_PORTAL_PORT="$HDX_PORTAL_PORT" node "$_portal_script" >/dev/null 2>&1 &
HDX_PORTAL_PID=$!
fi
unset _script_dir _portal_script
fi
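The `/dev/tcp` redirection used above is a bash builtin feature (not a real device file) that attempts a TCP connection without needing `nc` or `lsof`. A standalone sketch of the same probe, assuming bash; the port checked here is just an example:

```bash
#!/usr/bin/env bash
# Returns 0 if something is listening on 127.0.0.1:$1, non-zero otherwise.
# /dev/tcp/<host>/<port> is interpreted by bash itself.
port_open() {
  (echo >/dev/tcp/127.0.0.1/"$1") 2>/dev/null
}

if port_open 9900; then
  echo "portal already running"
else
  echo "portal not running"
fi
```

Because the probe runs in a subshell with stderr discarded, a refused connection fails silently and the caller only sees the exit status.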

View file

@@ -34,29 +34,42 @@ DOCKER_COMPOSE_FILE="$REPO_ROOT/packages/app/tests/e2e/docker-compose.yml"
# so that multiple worktrees can run E2E tests in parallel without port
# conflicts. Override HDX_E2E_SLOT manually if you need a specific slot.
#
# Port mapping (base + slot) — shares the same base ports as dev-int
# since they never run simultaneously. All ports are below the OS
# ephemeral range (49152) to avoid OrbStack/Docker conflicts:
# OpAMP : 14320 + slot (14320-14419) shared with dev-int
# ClickHouse HTTP : 18123 + slot (18123-18222) shared with dev-int
# ClickHouse Native: 18223 + slot (18223-18322) e2e only
# API server : 19000 + slot (19000-19099) shared with dev-int
# MongoDB : 39999 + slot (39999-40098) shared with dev-int
# App (local) : 48001 + slot (48001-48100) e2e only
# App (fullstack) : 48081 + slot (48081-48180) e2e only
# Port allocation — E2E gets its own range (20320-21399) so it can run
# simultaneously with CI integration tests (14320-40098) and the dev
# stack (30100-31199). All ports are below the OS ephemeral range
# (32768 Linux, 49152 macOS).
#
# Port mapping (base + slot):
# OpAMP : 20320 + slot (20320-20419)
# ClickHouse HTTP : 20500 + slot (20500-20599)
# ClickHouse Native: 20600 + slot (20600-20699)
# API server : 21000 + slot (21000-21099)
# MongoDB : 21100 + slot (21100-21199)
# App (local) : 21200 + slot (21200-21299)
# App (fullstack) : 21300 + slot (21300-21399)
# ---------------------------------------------------------------------------
export HDX_E2E_SLOT="${HDX_E2E_SLOT:-$(printf '%s' "$(basename "$REPO_ROOT")" | cksum | awk '{print $1 % 100}')}"
export HDX_E2E_OPAMP_PORT="${HDX_E2E_OPAMP_PORT:-$((14320 + HDX_E2E_SLOT))}"
export HDX_E2E_CH_PORT="${HDX_E2E_CH_PORT:-$((18123 + HDX_E2E_SLOT))}"
export HDX_E2E_CH_NATIVE_PORT="${HDX_E2E_CH_NATIVE_PORT:-$((18223 + HDX_E2E_SLOT))}"
export HDX_E2E_API_PORT="${HDX_E2E_API_PORT:-$((19000 + HDX_E2E_SLOT))}"
export HDX_E2E_MONGO_PORT="${HDX_E2E_MONGO_PORT:-$((39999 + HDX_E2E_SLOT))}"
export HDX_E2E_APP_LOCAL_PORT="${HDX_E2E_APP_LOCAL_PORT:-$((48001 + HDX_E2E_SLOT))}"
export HDX_E2E_APP_PORT="${HDX_E2E_APP_PORT:-$((48081 + HDX_E2E_SLOT))}"
export HDX_E2E_OPAMP_PORT="${HDX_E2E_OPAMP_PORT:-$((20320 + HDX_E2E_SLOT))}"
export HDX_E2E_CH_PORT="${HDX_E2E_CH_PORT:-$((20500 + HDX_E2E_SLOT))}"
export HDX_E2E_CH_NATIVE_PORT="${HDX_E2E_CH_NATIVE_PORT:-$((20600 + HDX_E2E_SLOT))}"
export HDX_E2E_API_PORT="${HDX_E2E_API_PORT:-$((21000 + HDX_E2E_SLOT))}"
export HDX_E2E_MONGO_PORT="${HDX_E2E_MONGO_PORT:-$((21100 + HDX_E2E_SLOT))}"
export HDX_E2E_APP_LOCAL_PORT="${HDX_E2E_APP_LOCAL_PORT:-$((21200 + HDX_E2E_SLOT))}"
export HDX_E2E_APP_PORT="${HDX_E2E_APP_PORT:-$((21300 + HDX_E2E_SLOT))}"
export E2E_PROJECT="e2e-${HDX_E2E_SLOT}"
# --- Log capture for dev-portal visibility ---
HDX_E2E_SLOTS_DIR="${HOME}/.config/hyperdx/dev-slots"
HDX_E2E_LOGS_DIR="${HDX_E2E_SLOTS_DIR}/${HDX_E2E_SLOT}/logs-e2e"
mkdir -p "$HDX_E2E_LOGS_DIR"
exec > >(tee "$HDX_E2E_LOGS_DIR/e2e.log") 2>&1
# --- Start dev portal in background if not already running ---
# shellcheck source=./ensure-dev-portal.sh
source "${REPO_ROOT}/scripts/ensure-dev-portal.sh"
echo "Using E2E slot ${HDX_E2E_SLOT} (project=${E2E_PROJECT} ch=${HDX_E2E_CH_PORT} ch-native=${HDX_E2E_CH_NATIVE_PORT} mongo=${HDX_E2E_MONGO_PORT} api=${HDX_E2E_API_PORT} app=${HDX_E2E_APP_PORT} app-local=${HDX_E2E_APP_LOCAL_PORT} opamp=${HDX_E2E_OPAMP_PORT})"
# Configuration constants
@@ -92,6 +105,19 @@ done
cleanup_services() {
echo "Stopping E2E services and removing volumes..."
docker compose -p "$E2E_PROJECT" -f "$DOCKER_COMPOSE_FILE" down -v
# Archive logs to history instead of deleting
if [ -d "$HDX_E2E_LOGS_DIR" ] && [ -n "$(ls -A "$HDX_E2E_LOGS_DIR" 2>/dev/null)" ]; then
_ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
_hist="${HDX_E2E_SLOTS_DIR}/${HDX_E2E_SLOT}/history/e2e-${_ts}"
mkdir -p "$_hist"
mv "$HDX_E2E_LOGS_DIR"/* "$_hist/" 2>/dev/null || true
_wt=$(basename "$(git -C "$REPO_ROOT" rev-parse --show-toplevel 2>/dev/null || echo "$REPO_ROOT")")
_br=$(git -C "$REPO_ROOT" rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
cat > "$_hist/meta.json" <<METAEOF
{"worktree":"${_wt}","branch":"${_br}","worktreePath":"${REPO_ROOT}"}
METAEOF
fi
rm -rf "$HDX_E2E_LOGS_DIR" 2>/dev/null || true
}
check_mongodb_health() {
@@ -211,6 +237,9 @@ run_tests() {
# Set up cleanup trap
setup_cleanup_trap
# Clean up E2E Next.js build directory to avoid stale lock/cache issues
rm -rf "$REPO_ROOT/packages/app/.next-e2e" 2>/dev/null || true
# Always start and seed ClickHouse (shared by both modes)
setup_clickhouse
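The slot derivation used throughout these scripts hashes the worktree directory name with `cksum` and takes the CRC modulo 100, so the same worktree always lands on the same slot and two different worktrees rarely collide. A standalone sketch (the worktree name is hypothetical):

```bash
#!/usr/bin/env bash
# Deterministic slot (0-99) from a worktree name: cksum's CRC field mod 100.
worktree_slot() {
  printf '%s' "$1" | cksum | awk '{print $1 % 100}'
}

slot=$(worktree_slot "hyperdx-feature-x")
echo "slot=$slot"
# Adding the slot to a base port yields stable, non-overlapping ports,
# e.g. the E2E API server at 21000 + slot:
echo "e2e api port=$((21000 + slot))"
```

Note the mapping is deterministic but not collision-free: 100 slots means two worktrees can share a slot, which is why the scripts allow a manual override such as `HDX_E2E_SLOT`.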