mirror of
https://github.com/hyperdxio/hyperdx
synced 2026-04-21 13:37:15 +00:00
## Summary
- **Replace QEMU-emulated multi-platform builds with native ARM64 runners** for both `release.yml` and `release-nightly.yml`, significantly speeding up CI build times
- Each architecture (amd64/arm64) now builds in parallel on native hardware, then a manifest-merge job combines them into a multi-arch Docker tag using `docker buildx imagetools create`
- Migrate from raw Makefile `docker buildx build` commands to `docker/build-push-action@v6` for better GHA integration
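The manifest-merge step boils down to one `imagetools` call per image. A hedged sketch with illustrative tag names (the real workflow derives tags from the release version and pushes to a registry, so this is not runnable standalone):

```shell
# Sketch of a manifest-merge job: each native runner pushes a per-arch
# intermediate tag, then a single `imagetools` call stitches them into one
# multi-arch tag. Tag names here are illustrative.
docker buildx imagetools create \
  -t hyperdx/hyperdx:2.21.0 \
  hyperdx/hyperdx:2.21.0-amd64 \
  hyperdx/hyperdx:2.21.0-arm64
```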
## Changes
### `.github/workflows/release.yml`
- Removed QEMU setup entirely
- Replaced single `release` matrix job with per-image build+publish job pairs:
- `build-otel-collector` / `publish-otel-collector` (runners: `ubuntu-latest` / `ubuntu-latest-arm64`)
- `build-app` / `publish-app` (runners: `Large-Runner-x64-32` / `Large-Runner-ARM64-32`)
- `build-local` / `publish-local` (runners: `Large-Runner-x64-32` / `Large-Runner-ARM64-32`)
- `build-all-in-one` / `publish-all-in-one` (runners: `Large-Runner-x64-32` / `Large-Runner-ARM64-32`)
- Added `check_version` job to centralize skip-if-exists logic (replaces per-image `docker manifest inspect` in Makefile)
- Removed `check_release_app_pushed` artifact upload/download — `publish-app` now outputs `app_was_pushed` directly
- Scoped GHA build cache per image+arch (e.g. `scope=app-amd64`) to avoid collisions
- All 4 images build in parallel (8 build jobs total), then 4 manifest-merge jobs, then downstream notifications
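The centralized skip-if-exists decision can be sketched as below. The real `check_version` job queries DockerHub via `docker manifest inspect`; `tag_exists` here is a hypothetical stand-in so the control flow is visible without registry access:

```shell
# Illustrative skip-if-exists logic. tag_exists is a stand-in for:
#   docker manifest inspect "$1" >/dev/null 2>&1
tag_exists() {
  case "$1" in *:2.21.0) return 0 ;; *) return 1 ;; esac
}

if tag_exists "hyperdx/hyperdx:2.21.0"; then
  skip=true   # tag already published: downstream build jobs are skipped
else
  skip=false
fi
echo "skip=$skip"
```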
### `.github/workflows/release-nightly.yml`
- Same native runner pattern (no skip logic since nightly always rebuilds)
- 8 build + 4 publish jobs running in parallel
- Slack failure notification and OTel trace export now depend on publish jobs
### `Makefile`
- Removed `release-*` and `release-*-nightly` targets (lines 203-361) — build logic moved into workflow YAML
- Local `build-*` targets preserved for developer use
## Architecture
Follows the same pattern as `release-ee.yml` in the EE repo:
```
check_changesets → check_version
                        │
    ┌───────────────────┼───────────────────┬───────────────────┐
    v                   v                   v                   v
build-app(x2)    build-otel(x2)    build-local(x2)     build-aio(x2)
    │                   │                   │                   │
publish-app      publish-otel      publish-local       publish-aio
    │                   │                   │                   │
    └─────────┬─────────┴───────────────────┴───────────────────┘
              v
  notify_helm_charts / notify_clickhouse_clickstack
              │
      otel-cicd-action
```
## Notes
- `--squash` flag dropped — it's an experimental Docker feature incompatible with `build-push-action` in multi-platform mode. `sbom` and `provenance` are preserved via action params.
- Per-arch intermediate tags (e.g. `hyperdx/hyperdx:2.21.0-amd64`) remain visible on DockerHub — this is standard practice.
- Dual DockerHub namespace tagging (`hyperdx/*` + `clickhouse/clickstack-*`) preserved.
## Sample Run
https://github.com/hyperdxio/hyperdx/actions/runs/23362835749
## `Makefile` (202 lines, 6.8 KiB)
LATEST_VERSION := $$(sed -n 's/.*"version": "\([^"]*\)".*/\1/p' package.json)
BUILD_PLATFORMS = linux/arm64,linux/amd64

include .env

# ---------------------------------------------------------------------------
# Multi-agent / worktree isolation
# ---------------------------------------------------------------------------
# Compute a deterministic port offset (0-99) from the working directory name
# so that multiple worktrees can run integration tests in parallel without
# port conflicts. Override HDX_CI_SLOT manually if you need a specific slot.
#
# Port mapping (base + slot):
#   ClickHouse HTTP : 18123 + slot
#   MongoDB         : 39999 + slot
#   API test server : 19000 + slot
#   OpAMP           : 14320 + slot
# ---------------------------------------------------------------------------
HDX_CI_SLOT ?= $(shell printf '%s' "$(notdir $(CURDIR))" | cksum | awk '{print $$1 % 100}')
HDX_CI_PROJECT := int-$(HDX_CI_SLOT)
HDX_CI_CH_PORT := $(shell echo $$((18123 + $(HDX_CI_SLOT))))
HDX_CI_MONGO_PORT := $(shell echo $$((39999 + $(HDX_CI_SLOT))))
HDX_CI_API_PORT := $(shell echo $$((19000 + $(HDX_CI_SLOT))))
HDX_CI_OPAMP_PORT := $(shell echo $$((14320 + $(HDX_CI_SLOT))))

export HDX_CI_CH_PORT HDX_CI_MONGO_PORT HDX_CI_API_PORT HDX_CI_OPAMP_PORT
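The slot derivation above can be sketched in plain shell: hash the worktree directory name with POSIX `cksum`, reduce it modulo 100, and offset the service ports ("my-worktree" is an illustrative directory name standing in for `$(notdir $(CURDIR))`):

```shell
# Deterministic slot in 0-99 from a directory name, then per-slot ports.
name="my-worktree"
slot=$(printf '%s' "$name" | cksum | awk '{print $1 % 100}')
ch_port=$((18123 + slot))
mongo_port=$((39999 + slot))
echo "slot=$slot ch=$ch_port mongo=$mongo_port"
```

The same name always maps to the same slot, so reruns in one worktree reuse their ports while sibling worktrees almost always land on different ones.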

.PHONY: all
all: install-tools

.PHONY: install-tools
install-tools:
	yarn setup
	@echo "All tools installed"

.PHONY: dev-build
dev-build:
	docker compose -f docker-compose.dev.yml build

.PHONY: dev-up
dev-up:
	npm run dev

.PHONY: dev-down
dev-down:
	docker compose -f docker-compose.dev.yml down

.PHONY: dev-lint
dev-lint:
	npx nx run-many -t lint:fix

.PHONY: ci-build
ci-build:
	npx nx run-many -t ci:build

.PHONY: ci-lint
ci-lint:
	npx nx run-many -t ci:lint

.PHONY: dev-int-build
dev-int-build:
	npx nx run-many -t ci:build
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml build

.PHONY: dev-int
dev-int:
	@echo "Using CI slot $(HDX_CI_SLOT) (project=$(HDX_CI_PROJECT) ch=$(HDX_CI_CH_PORT) mongo=$(HDX_CI_MONGO_PORT) api=$(HDX_CI_API_PORT))"
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml up -d
	npx nx run @hyperdx/api:dev:int $(FILE); ret=$$?; \
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down; \
	exit $$ret
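The `ret=$$?` dance in `dev-int` above exists so that tearing down the compose stack cannot mask a test failure: the runner's exit status is captured first, cleanup always runs, and the captured status is re-raised. A minimal sketch with stand-in functions (`run_tests`/`cleanup` are hypothetical stand-ins for the nx and compose commands):

```shell
# Capture the test exit code before cleanup, then exit with it.
run_tests() { return 3; }            # pretend the tests failed with code 3
cleanup()   { echo "cleanup ran"; }  # always runs, like `docker compose down`

ret=0
run_tests || ret=$?   # capture failure without aborting under `set -e`
cleanup
echo "final exit code would be $ret"
```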

.PHONY: dev-int-common-utils
dev-int-common-utils:
	@echo "Using CI slot $(HDX_CI_SLOT) (project=$(HDX_CI_PROJECT) ch=$(HDX_CI_CH_PORT) mongo=$(HDX_CI_MONGO_PORT))"
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml up -d
	npx nx run @hyperdx/common-utils:dev:int $(FILE)
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down

.PHONY: ci-int
ci-int:
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml up -d --quiet-pull
	npx nx run-many -t ci:int --parallel=false
	docker compose -p $(HDX_CI_PROJECT) -f ./docker-compose.ci.yml down

.PHONY: dev-unit
dev-unit:
	npx nx run-many -t dev:unit

.PHONY: ci-unit
ci-unit:
	npx nx run-many -t ci:unit

.PHONY: e2e
e2e:
# Run full-stack by default (MongoDB + API + local Docker ClickHouse)
# For more control (--ui, --last-failed, --headed, etc), call the script directly:
#   ./scripts/test-e2e.sh --ui --last-failed
	./scripts/test-e2e.sh

# TODO: check db connections before running the migration CLIs
.PHONY: dev-migrate-db
dev-migrate-db:
	@echo "Migrating Mongo db...\n"
	npx nx run @hyperdx/api:dev:migrate-db
	@echo "Migrating ClickHouse db...\n"
	npx nx run @hyperdx/api:dev:migrate-ch

.PHONY: version
version:
	sh ./version.sh

# Build targets (local builds only)

.PHONY: build-otel-collector
build-otel-collector:
	docker build . -f docker/otel-collector/Dockerfile \
	  -t ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  -t ${NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  --target prod

.PHONY: build-local
build-local:
	docker build . -f ./docker/hyperdx/Dockerfile \
	  --build-context clickhouse=./docker/clickhouse \
	  --build-context otel-collector=./docker/otel-collector \
	  --build-context hyperdx=./docker/hyperdx \
	  --build-context api=./packages/api \
	  --build-context app=./packages/app \
	  --build-arg CODE_VERSION=${CODE_VERSION} \
	  -t ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  -t ${NEXT_LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  --target all-in-one-noauth

.PHONY: build-all-in-one
build-all-in-one:
	docker build . -f ./docker/hyperdx/Dockerfile \
	  --build-context clickhouse=./docker/clickhouse \
	  --build-context otel-collector=./docker/otel-collector \
	  --build-context hyperdx=./docker/hyperdx \
	  --build-context api=./packages/api \
	  --build-context app=./packages/app \
	  --build-arg CODE_VERSION=${CODE_VERSION} \
	  -t ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  -t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  --target all-in-one-auth

.PHONY: build-app
build-app:
	docker build . -f ./docker/hyperdx/Dockerfile \
	  --build-context hyperdx=./docker/hyperdx \
	  --build-context api=./packages/api \
	  --build-context app=./packages/app \
	  --build-arg CODE_VERSION=${CODE_VERSION} \
	  -t ${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
	  --target prod

.PHONY: build-otel-collector-nightly
build-otel-collector-nightly:
	docker build . -f docker/otel-collector/Dockerfile \
	  -t ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  -t ${NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  --target prod

.PHONY: build-app-nightly
build-app-nightly:
	docker build . -f ./docker/hyperdx/Dockerfile \
	  --build-context hyperdx=./docker/hyperdx \
	  --build-context api=./packages/api \
	  --build-context app=./packages/app \
	  --build-arg CODE_VERSION=${CODE_VERSION} \
	  -t ${IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  --target prod

.PHONY: build-local-nightly
build-local-nightly:
	docker build . -f ./docker/hyperdx/Dockerfile \
	  --build-context clickhouse=./docker/clickhouse \
	  --build-context otel-collector=./docker/otel-collector \
	  --build-context hyperdx=./docker/hyperdx \
	  --build-context api=./packages/api \
	  --build-context app=./packages/app \
	  --build-arg CODE_VERSION=${CODE_VERSION} \
	  -t ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  -t ${NEXT_LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  --target all-in-one-noauth

.PHONY: build-all-in-one-nightly
build-all-in-one-nightly:
	docker build . -f ./docker/hyperdx/Dockerfile \
	  --build-context clickhouse=./docker/clickhouse \
	  --build-context otel-collector=./docker/otel-collector \
	  --build-context hyperdx=./docker/hyperdx \
	  --build-context api=./packages/api \
	  --build-context app=./packages/app \
	  --build-arg CODE_VERSION=${CODE_VERSION} \
	  -t ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  -t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
	  --target all-in-one-auth