ci: Replace QEMU with native ARM64 runners for release builds (#1952)

## Summary

- **Replace QEMU-emulated multi-platform builds with native ARM64 runners** for both `release.yml` and `release-nightly.yml`, significantly speeding up CI build times
- Each architecture (amd64/arm64) now builds in parallel on native hardware, then a manifest-merge job combines them into a multi-arch Docker tag using `docker buildx imagetools create`
- Migrate from raw Makefile `docker buildx build` commands to `docker/build-push-action@v6` for better GHA integration
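
The manifest merge is the same `docker buildx imagetools create` call in every publish job; condensed into a sketch below, with an illustrative image name and tag (the real values are loaded from `.env`):

```yaml
# Sketch of a publish job: merge the two natively built per-arch tags
# into a single multi-arch tag. Image name and tag are illustrative.
publish-app:
  needs: build-app
  runs-on: ubuntu-latest
  steps:
    - name: Setup Docker Buildx
      uses: docker/setup-buildx-action@v3
    - name: Create multi-arch manifest
      run: |
        IMAGE="hyperdx/hyperdx"
        TAG="2.21.0"
        docker buildx imagetools create \
          -t "${IMAGE}:${TAG}" \
          "${IMAGE}:${TAG}-amd64" \
          "${IMAGE}:${TAG}-arm64"
```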

## Changes

### `.github/workflows/release.yml`
- Removed QEMU setup entirely
- Replaced single `release` matrix job with per-image build+publish job pairs:
  - `build-otel-collector` / `publish-otel-collector` (runners: `ubuntu-latest` / `ubuntu-latest-arm64`)
  - `build-app` / `publish-app` (runners: `Large-Runner-x64-32` / `Large-Runner-ARM64-32`)
  - `build-local` / `publish-local` (runners: `Large-Runner-x64-32` / `Large-Runner-ARM64-32`)
  - `build-all-in-one` / `publish-all-in-one` (runners: `Large-Runner-x64-32` / `Large-Runner-ARM64-32`)
- Added `check_version` job to centralize skip-if-exists logic (replaces per-image `docker manifest inspect` in Makefile)
- Removed `check_release_app_pushed` artifact upload/download — `publish-app` now outputs `app_was_pushed` directly
- Scoped GHA build cache per image+arch (e.g. `scope=app-amd64`) to avoid collisions
- All 4 images build in parallel (8 build jobs total), then 4 manifest-merge jobs, then downstream notifications
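
The skip-if-exists gate that replaced the Makefile's per-image `docker manifest inspect` boils down to a registry probe plus a branch. A runnable sketch, with the probe stubbed out so the branching runs without Docker (the workflow calls `docker manifest inspect` directly):

```shell
# tag_exists is a stub standing in for:
#   docker manifest inspect "$1" > /dev/null 2>&1
# Here any tag ending in ":published" is treated as already on the registry.
tag_exists() {
  case "$1" in
    *:published) return 0 ;;  # tag found: manifest inspect would succeed
    *)           return 1 ;;  # tag missing: manifest inspect would fail
  esac
}

# Emit the value the downstream build jobs gate on.
should_release() {
  if tag_exists "$1"; then
    echo "false"   # tag exists: skip the release
  else
    echo "true"    # tag missing: proceed
  fi
}

should_release "hyperdx/hyperdx:published"
should_release "hyperdx/hyperdx:2.99.0"
```

In the workflow the result is written to `$GITHUB_OUTPUT` as `should_release`, and each build job carries `if: needs.check_version.outputs.should_release == 'true'`.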

### `.github/workflows/release-nightly.yml`
- Same native runner pattern (no skip logic since nightly always rebuilds)
- 8 build jobs (2 per image) run in parallel; 4 publish jobs then merge each image's per-arch tags
- Slack failure notification and OTel trace export now depend on publish jobs

### `Makefile`
- Removed `release-*` and `release-*-nightly` targets (lines 203-361) — build logic moved into workflow YAML
- Local `build-*` targets preserved for developer use

## Architecture

Follows the same pattern as `release-ee.yml` in the EE repo:

```
check_changesets → check_version
                        │
    ┌───────────────────┼───────────────────┬───────────────────┐
    v                   v                   v                   v
build-app(x2)   build-otel(x2)    build-local(x2)    build-aio(x2)
    │                   │                   │                   │
publish-app      publish-otel       publish-local      publish-aio
    │                   │                   │                   │
    └─────────┬─────────┴───────────────────┴───────────────────┘
              v
     notify_helm_charts / notify_clickhouse_clickstack
              │
     otel-cicd-action
```

## Notes

- `--squash` flag dropped — it's an experimental Docker feature incompatible with `build-push-action` in multi-platform mode. `sbom` and `provenance` are preserved via action params.
- Per-arch intermediate tags (e.g. `hyperdx/hyperdx:2.21.0-amd64`) remain visible on DockerHub — this is standard practice.
- Dual DockerHub namespace tagging (`hyperdx/*` + `clickhouse/clickstack-*`) preserved.
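
A merged tag can be sanity-checked with an extra step after the merge; a hedged sketch (this step is not part of the change, and the tag is illustrative):

```yaml
# Optional follow-up step in a publish job: confirm the merged tag
# resolves to a manifest list covering both architectures.
- name: Verify multi-arch manifest
  run: docker buildx imagetools inspect hyperdx/hyperdx:2.21.0
```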


## Sample Run
https://github.com/hyperdxio/hyperdx/actions/runs/23362835749
Author: Warren Lee, 2026-03-20 16:04:49 -07:00 (committed by GitHub)
Parent: 5d2ebc46ee, commit: 470b2c2992
Signature: no known key found in database (GPG key ID: B5690EEEBB952194)
7 changed files with 784 additions and 338 deletions


@@ -0,0 +1,7 @@
---
"@hyperdx/api": patch
"@hyperdx/app": patch
"@hyperdx/otel-collector": patch
---
ci: Replace QEMU with native ARM64 runners for release builds


@@ -13,38 +13,26 @@ permissions:
pull-requests: write
actions: read
jobs:
release:
name: Release
runs-on: ubuntu-24.04
# ---------------------------------------------------------------------------
# OTel Collector Nightly
# ---------------------------------------------------------------------------
build-otel-collector-nightly:
name: Build OTel Collector Nightly (${{ matrix.arch }})
strategy:
fail-fast: true
matrix:
release:
- release-all-in-one-nightly
- release-app-nightly
- release-local-nightly
- release-otel-collector-nightly
include:
- arch: amd64
platform: linux/amd64
runner: ubuntu-latest
- arch: arm64
platform: linux/arm64
runner: ubuntu-latest-arm64
runs-on: ${{ matrix.runner }}
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
docker-images: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
swap-storage: true
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
@@ -59,17 +47,309 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Publish Images
run: make ${{ matrix.release }}
- name: Build and Push
uses: docker/build-push-action@v6
with:
context: .
file: ./docker/otel-collector/Dockerfile
platforms: ${{ matrix.platform }}
target: prod
tags: |
${{ env.OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
${{ env.NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
push: true
cache-from: type=gha,scope=otel-collector-nightly-${{ matrix.arch }}
cache-to:
type=gha,mode=max,scope=otel-collector-nightly-${{ matrix.arch }}
publish-otel-collector-nightly:
name: Publish OTel Collector Nightly Manifest
needs: build-otel-collector-nightly
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifests
run: |
TAG="${{ env.IMAGE_NIGHTLY_TAG }}"
for IMAGE in "${{ env.OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}" "${{ env.NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}"; do
docker buildx imagetools create \
-t "${IMAGE}:${TAG}" \
"${IMAGE}:${TAG}-amd64" \
"${IMAGE}:${TAG}-arm64"
done
# ---------------------------------------------------------------------------
# App Nightly
# ---------------------------------------------------------------------------
build-app-nightly:
name: Build App Nightly (${{ matrix.arch }})
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: Large-Runner-x64-32
- arch: arm64
platform: linux/arm64
runner: Large-Runner-ARM64-32
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Build and Push
uses: docker/build-push-action@v6
with:
file: ./docker/hyperdx/Dockerfile
platforms: ${{ matrix.platform }}
target: prod
build-contexts: |
hyperdx=./docker/hyperdx
api=./packages/api
app=./packages/app
build-args: |
CODE_VERSION=${{ env.IMAGE_NIGHTLY_TAG }}
tags: |
${{ env.IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
push: true
sbom: true
provenance: true
cache-from: type=gha,scope=app-nightly-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=app-nightly-${{ matrix.arch }}
publish-app-nightly:
name: Publish App Nightly Manifest
needs: build-app-nightly
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifest
run: |
TAG="${{ env.IMAGE_NIGHTLY_TAG }}"
IMAGE="${{ env.IMAGE_NAME_DOCKERHUB }}"
docker buildx imagetools create \
-t "${IMAGE}:${TAG}" \
"${IMAGE}:${TAG}-amd64" \
"${IMAGE}:${TAG}-arm64"
# ---------------------------------------------------------------------------
# Local Nightly (all-in-one-noauth)
# ---------------------------------------------------------------------------
build-local-nightly:
name: Build Local Nightly (${{ matrix.arch }})
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: Large-Runner-x64-32
- arch: arm64
platform: linux/arm64
runner: Large-Runner-ARM64-32
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Build and Push
uses: docker/build-push-action@v6
with:
file: ./docker/hyperdx/Dockerfile
platforms: ${{ matrix.platform }}
target: all-in-one-noauth
build-contexts: |
clickhouse=./docker/clickhouse
otel-collector=./docker/otel-collector
hyperdx=./docker/hyperdx
api=./packages/api
app=./packages/app
build-args: |
CODE_VERSION=${{ env.IMAGE_NIGHTLY_TAG }}
tags: |
${{ env.LOCAL_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
${{ env.NEXT_LOCAL_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
push: true
sbom: true
provenance: true
cache-from: type=gha,scope=local-nightly-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=local-nightly-${{ matrix.arch }}
publish-local-nightly:
name: Publish Local Nightly Manifest
needs: build-local-nightly
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifests
run: |
TAG="${{ env.IMAGE_NIGHTLY_TAG }}"
for IMAGE in "${{ env.LOCAL_IMAGE_NAME_DOCKERHUB }}" "${{ env.NEXT_LOCAL_IMAGE_NAME_DOCKERHUB }}"; do
docker buildx imagetools create \
-t "${IMAGE}:${TAG}" \
"${IMAGE}:${TAG}-amd64" \
"${IMAGE}:${TAG}-arm64"
done
# ---------------------------------------------------------------------------
# All-in-One Nightly (all-in-one-auth)
# ---------------------------------------------------------------------------
build-all-in-one-nightly:
name: Build All-in-One Nightly (${{ matrix.arch }})
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: Large-Runner-x64-32
- arch: arm64
platform: linux/arm64
runner: Large-Runner-ARM64-32
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Build and Push
uses: docker/build-push-action@v6
with:
file: ./docker/hyperdx/Dockerfile
platforms: ${{ matrix.platform }}
target: all-in-one-auth
build-contexts: |
clickhouse=./docker/clickhouse
otel-collector=./docker/otel-collector
hyperdx=./docker/hyperdx
api=./packages/api
app=./packages/app
build-args: |
CODE_VERSION=${{ env.IMAGE_NIGHTLY_TAG }}
tags: |
${{ env.ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
${{ env.NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_NIGHTLY_TAG }}-${{ matrix.arch }}
push: true
sbom: true
provenance: true
cache-from: type=gha,scope=all-in-one-nightly-${{ matrix.arch }}
cache-to:
type=gha,mode=max,scope=all-in-one-nightly-${{ matrix.arch }}
publish-all-in-one-nightly:
name: Publish All-in-One Nightly Manifest
needs: build-all-in-one-nightly
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifests
run: |
TAG="${{ env.IMAGE_NIGHTLY_TAG }}"
for IMAGE in "${{ env.ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}" "${{ env.NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}"; do
docker buildx imagetools create \
-t "${IMAGE}:${TAG}" \
"${IMAGE}:${TAG}-amd64" \
"${IMAGE}:${TAG}-arm64"
done
# ---------------------------------------------------------------------------
# Failure notification + OTel
# ---------------------------------------------------------------------------
slack-notify-failure:
needs: release
needs:
[
publish-otel-collector-nightly,
publish-app-nightly,
publish-local-nightly,
publish-all-in-one-nightly,
]
runs-on: ubuntu-24.04
if: failure() && always()
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Get failed jobs
id: get_failed_jobs
uses: actions/github-script@v7
@@ -88,7 +368,6 @@ jobs:
.join(', ');
core.setOutput('failed_jobs', failedJobs);
- name: Slack Notification
uses: 8398a7/action-slack@v3
with:
@@ -96,7 +375,7 @@
fields: repo,workflow,commit,author
custom_payload: |
{
"text": "Release Nightly Failed 😔",
"text": "Release Nightly Failed",
"attachments": [{
"color": "danger",
"fields": [
@@ -122,48 +401,18 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_ENG_NOTIFS }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# notify_downstream:
# name: Notify Downstream
# needs: [publish_common_utils, release]
# runs-on: ubuntu-24.04
# if:
# needs.publish_common_utils.outputs.changeset_outputs_hasChangesets ==
# 'false'
# steps:
# - name: Checkout
# uses: actions/checkout@v4
# - name: Load Environment Variables from .env
# uses: xom9ikk/dotenv@v2
# - name: Get Downstream App Installation Token
# id: auth
# uses: actions/create-github-app-token@v2
# with:
# app-id: ${{ secrets.DOWNSTREAM_CH_APP_ID }}
# private-key: ${{ secrets.DOWNSTREAM_CH_APP_PRIVATE_KEY }}
# owner: ${{ secrets.DOWNSTREAM_CH_OWNER }}
# - name: Notify Downstream
# uses: actions/github-script@v7
# env:
# TAG: ${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}
# with:
# github-token: ${{ steps.auth.outputs.token }}
# script: |
# const { TAG } = process.env;
# const result = await github.rest.actions.createWorkflowDispatch({
# owner: '${{ secrets.DOWNSTREAM_CH_OWNER }}',
# repo: '${{ secrets.DOWNSTREAM_DP_REPO }}',
# workflow_id: '${{ secrets.DOWNSTREAM_DP_WORKFLOW_ID }}',
# ref: 'main',
# inputs: {
# tag: TAG
# }
# });
otel-cicd-action:
if: always()
name: OpenTelemetry Export Trace
runs-on: ubuntu-latest
needs: [release, slack-notify-failure]
needs:
[
publish-otel-collector-nightly,
publish-app-nightly,
publish-local-nightly,
publish-all-in-one-nightly,
slack-notify-failure,
]
steps:
- name: Export workflow
uses: corentinmusard/otel-cicd-action@v4


@@ -17,10 +17,6 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Setup node
uses: actions/setup-node@v4
with:
@@ -43,42 +39,62 @@ jobs:
YARN_ENABLE_IMMUTABLE_INSTALLS: false
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
release:
name: Release
# ---------------------------------------------------------------------------
# Check if version already published (skip-if-exists)
# ---------------------------------------------------------------------------
check_version:
name: Check if version exists
needs: check_changesets
runs-on: ubuntu-24.04
concurrency:
group:
${{ github.workflow }}-release-${{ matrix.release }}-${{ github.ref }}
cancel-in-progress: false
strategy:
matrix:
release:
- release-all-in-one
- release-app
- release-local
- release-otel-collector
if:
needs.check_changesets.outputs.changeset_outputs_hasChangesets == 'false'
outputs:
should_release: ${{ steps.check.outputs.should_release }}
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
docker-images: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
swap-storage: true
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Check if app image tag already exists
id: check
run: |
TAG_EXISTS=$(docker manifest inspect ${{ env.IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }} > /dev/null 2>&1 && echo "true" || echo "false")
if [ "$TAG_EXISTS" = "true" ]; then
echo "Tag ${{ env.IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }} already exists. Skipping release."
echo "should_release=false" >> $GITHUB_OUTPUT
else
echo "Tag does not exist. Proceeding with release."
echo "should_release=true" >> $GITHUB_OUTPUT
fi
# ---------------------------------------------------------------------------
# OTel Collector build each arch natively, then merge into multi-arch tag
# ---------------------------------------------------------------------------
build-otel-collector:
name: Build OTel Collector (${{ matrix.arch }})
needs: [check_changesets, check_version]
if: needs.check_version.outputs.should_release == 'true'
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: ubuntu-latest
- arch: arm64
platform: linux/arm64
runner: ubuntu-latest-arm64
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
@@ -93,68 +109,332 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Publish Images
id: publish
if:
needs.check_changesets.outputs.changeset_outputs_hasChangesets ==
'false'
run: |
OUTPUT=$(make ${{ matrix.release }} 2>&1)
echo "$OUTPUT"
# Store the output in a file for the specific release target
echo "$OUTPUT" > /tmp/${{ matrix.release }}-output.txt
# Upload the output as an artifact if this is release-app
if [ "${{ matrix.release }}" = "release-app" ]; then
if echo "$OUTPUT" | grep -q "already exists. Skipping push."; then
echo "RELEASE_APP_PUSHED=false" > /tmp/release-app-status.txt
else
echo "RELEASE_APP_PUSHED=true" > /tmp/release-app-status.txt
fi
fi
- name: Upload release-app status
if: matrix.release == 'release-app'
uses: actions/upload-artifact@v4
- name: Build and Push
uses: docker/build-push-action@v6
with:
name: release-app-status
path: /tmp/release-app-status.txt
check_release_app_pushed:
name: Check if release-app pushed
needs: [check_changesets, release]
runs-on: ubuntu-24.04
outputs:
app_was_pushed: ${{ steps.check.outputs.pushed }}
if:
needs.check_changesets.outputs.changeset_outputs_hasChangesets == 'false'
context: .
file: ./docker/otel-collector/Dockerfile
platforms: ${{ matrix.platform }}
target: prod
tags: |
${{ env.OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
${{ env.NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
push: true
cache-from: type=gha,scope=otel-collector-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=otel-collector-${{ matrix.arch }}
publish-otel-collector:
name: Publish OTel Collector Manifest
needs: [check_version, build-otel-collector]
runs-on: ubuntu-latest
steps:
- name: Download release-app status
uses: actions/download-artifact@v4
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
name: release-app-status
path: /tmp
- name: Check if release-app was pushed
id: check
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifests
run: |
if [ -f /tmp/release-app-status.txt ]; then
STATUS=$(cat /tmp/release-app-status.txt)
echo "Release app status: $STATUS"
if [ "$STATUS" = "RELEASE_APP_PUSHED=true" ]; then
echo "pushed=true" >> $GITHUB_OUTPUT
else
echo "pushed=false" >> $GITHUB_OUTPUT
fi
else
echo "No release-app status file found, assuming not pushed"
echo "pushed=false" >> $GITHUB_OUTPUT
fi
VERSION="${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}"
MAJOR="${{ env.IMAGE_VERSION }}"
LATEST="${{ env.IMAGE_LATEST_TAG }}"
for IMAGE in "${{ env.OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}" "${{ env.NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB }}"; do
docker buildx imagetools create \
-t "${IMAGE}:${VERSION}" \
-t "${IMAGE}:${MAJOR}" \
-t "${IMAGE}:${LATEST}" \
"${IMAGE}:${VERSION}-amd64" \
"${IMAGE}:${VERSION}-arm64"
done
# ---------------------------------------------------------------------------
# App (fullstack prod) build each arch natively, then merge
# ---------------------------------------------------------------------------
build-app:
name: Build App (${{ matrix.arch }})
needs: [check_changesets, check_version]
if: needs.check_version.outputs.should_release == 'true'
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: Large-Runner-x64-32
- arch: arm64
platform: linux/arm64
runner: Large-Runner-ARM64-32
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Build and Push
uses: docker/build-push-action@v6
with:
file: ./docker/hyperdx/Dockerfile
platforms: ${{ matrix.platform }}
target: prod
build-contexts: |
hyperdx=./docker/hyperdx
api=./packages/api
app=./packages/app
build-args: |
CODE_VERSION=${{ env.CODE_VERSION }}
tags: |
${{ env.IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
push: true
sbom: true
provenance: true
cache-from: type=gha,scope=app-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=app-${{ matrix.arch }}
publish-app:
name: Publish App Manifest
needs: [check_version, build-app]
runs-on: ubuntu-latest
outputs:
app_was_pushed: 'true'
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifest
run: |
VERSION="${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}"
MAJOR="${{ env.IMAGE_VERSION }}"
LATEST="${{ env.IMAGE_LATEST_TAG }}"
IMAGE="${{ env.IMAGE_NAME_DOCKERHUB }}"
docker buildx imagetools create \
-t "${IMAGE}:${VERSION}" \
-t "${IMAGE}:${MAJOR}" \
-t "${IMAGE}:${LATEST}" \
"${IMAGE}:${VERSION}-amd64" \
"${IMAGE}:${VERSION}-arm64"
# ---------------------------------------------------------------------------
# Local (all-in-one-noauth) build each arch natively, then merge
# ---------------------------------------------------------------------------
build-local:
name: Build Local (${{ matrix.arch }})
needs: [check_changesets, check_version]
if: needs.check_version.outputs.should_release == 'true'
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: Large-Runner-x64-32
- arch: arm64
platform: linux/arm64
runner: Large-Runner-ARM64-32
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Build and Push
uses: docker/build-push-action@v6
with:
file: ./docker/hyperdx/Dockerfile
platforms: ${{ matrix.platform }}
target: all-in-one-noauth
build-contexts: |
clickhouse=./docker/clickhouse
otel-collector=./docker/otel-collector
hyperdx=./docker/hyperdx
api=./packages/api
app=./packages/app
build-args: |
CODE_VERSION=${{ env.CODE_VERSION }}
tags: |
${{ env.LOCAL_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
${{ env.NEXT_LOCAL_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
push: true
sbom: true
provenance: true
cache-from: type=gha,scope=local-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=local-${{ matrix.arch }}
publish-local:
name: Publish Local Manifest
needs: [check_version, build-local]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifests
run: |
VERSION="${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}"
MAJOR="${{ env.IMAGE_VERSION }}"
LATEST="${{ env.IMAGE_LATEST_TAG }}"
for IMAGE in "${{ env.LOCAL_IMAGE_NAME_DOCKERHUB }}" "${{ env.NEXT_LOCAL_IMAGE_NAME_DOCKERHUB }}"; do
docker buildx imagetools create \
-t "${IMAGE}:${VERSION}" \
-t "${IMAGE}:${MAJOR}" \
-t "${IMAGE}:${LATEST}" \
"${IMAGE}:${VERSION}-amd64" \
"${IMAGE}:${VERSION}-arm64"
done
# ---------------------------------------------------------------------------
# All-in-One (all-in-one-auth) build each arch natively, then merge
# ---------------------------------------------------------------------------
build-all-in-one:
name: Build All-in-One (${{ matrix.arch }})
needs: [check_changesets, check_version]
if: needs.check_version.outputs.should_release == 'true'
strategy:
fail-fast: true
matrix:
include:
- arch: amd64
platform: linux/amd64
runner: Large-Runner-x64-32
- arch: arm64
platform: linux/arm64
runner: Large-Runner-ARM64-32
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Build and Push
uses: docker/build-push-action@v6
with:
file: ./docker/hyperdx/Dockerfile
platforms: ${{ matrix.platform }}
target: all-in-one-auth
build-contexts: |
clickhouse=./docker/clickhouse
otel-collector=./docker/otel-collector
hyperdx=./docker/hyperdx
api=./packages/api
app=./packages/app
build-args: |
CODE_VERSION=${{ env.CODE_VERSION }}
tags: |
${{ env.ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
${{ env.NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}:${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}-${{ matrix.arch }}
push: true
sbom: true
provenance: true
cache-from: type=gha,scope=all-in-one-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=all-in-one-${{ matrix.arch }}
publish-all-in-one:
name: Publish All-in-One Manifest
needs: [check_version, build-all-in-one]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Load Environment Variables from .env
uses: xom9ikk/dotenv@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create multi-arch manifests
run: |
VERSION="${{ env.IMAGE_VERSION }}${{ env.IMAGE_VERSION_SUB_TAG }}"
MAJOR="${{ env.IMAGE_VERSION }}"
LATEST="${{ env.IMAGE_LATEST_TAG }}"
for IMAGE in "${{ env.ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}" "${{ env.NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB }}"; do
docker buildx imagetools create \
-t "${IMAGE}:${VERSION}" \
-t "${IMAGE}:${MAJOR}" \
-t "${IMAGE}:${LATEST}" \
"${IMAGE}:${VERSION}-amd64" \
"${IMAGE}:${VERSION}-arm64"
done
# ---------------------------------------------------------------------------
# Downstream notifications
# ---------------------------------------------------------------------------
notify_helm_charts:
name: Notify Helm-Charts Downstream
needs: [check_changesets, release, check_release_app_pushed]
needs:
[
check_changesets,
publish-app,
publish-otel-collector,
publish-local,
publish-all-in-one,
]
runs-on: ubuntu-24.04
if: |
needs.check_changesets.outputs.changeset_outputs_hasChangesets == 'false' &&
needs.check_release_app_pushed.outputs.app_was_pushed == 'true'
needs.publish-app.outputs.app_was_pushed == 'true'
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -178,15 +458,23 @@ jobs:
tag: TAG
}
});
notify_ch:
name: Notify CH Downstream
needs: [check_changesets, release, check_release_app_pushed]
needs:
[
check_changesets,
publish-app,
publish-otel-collector,
publish-local,
publish-all-in-one,
]
runs-on: ubuntu-24.04
# Temporarily disabled:
if: false
# if: |
# needs.check_changesets.outputs.changeset_outputs_hasChangesets == 'false' &&
# needs.check_release_app_pushed.outputs.app_was_pushed == 'true'
# needs.publish-app.outputs.app_was_pushed == 'true'
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -219,8 +507,15 @@ jobs:
});
notify_clickhouse_clickstack:
needs: [check_changesets, release, check_release_app_pushed]
if: needs.check_release_app_pushed.outputs.app_was_pushed == 'true'
needs:
[
check_changesets,
publish-app,
publish-otel-collector,
publish-local,
publish-all-in-one,
]
if: needs.publish-app.outputs.app_was_pushed == 'true'
timeout-minutes: 5
runs-on: ubuntu-24.04
steps:
@@ -254,8 +549,10 @@ jobs:
needs:
  [
    check_changesets,
    publish-app,
    publish-otel-collector,
    publish-local,
    publish-all-in-one,
    notify_helm_charts,
    notify_ch,
    notify_clickhouse_clickstack,

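For the `needs.publish-app.outputs.app_was_pushed` gates above to evaluate, `publish-app` has to declare the flag as a job-level output mapped from a step output. A minimal sketch of that wiring — the step id, runner label, and the `echo` line are illustrative, not copied from the workflow:

```yaml
publish-app:
  runs-on: ubuntu-latest
  outputs:
    app_was_pushed: ${{ steps.push.outputs.app_was_pushed }}
  steps:
    - id: push
      run: echo "app_was_pushed=true" >> "$GITHUB_OUTPUT"
```

This replaces the old artifact upload/download round-trip: downstream jobs read the value directly from the `needs` context.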
Makefile

@ -200,162 +200,3 @@ build-all-in-one-nightly:
-t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
--target all-in-one-auth
# Release targets (with multi-platform build and push)
.PHONY: release-otel-collector
release-otel-collector:
@TAG_EXISTS=$$(docker manifest inspect ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} > /dev/null 2>&1 && echo "true" || echo "false"); \
if [ "$$TAG_EXISTS" = "true" ]; then \
echo "Tag ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} already exists. Skipping push."; \
else \
echo "Tag ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} does not exist. Building and pushing..."; \
docker buildx build --platform ${BUILD_PLATFORMS} . -f docker/otel-collector/Dockerfile \
-t ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
-t ${NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
--target prod \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max; \
fi
.PHONY: release-local
release-local:
@TAG_EXISTS=$$(docker manifest inspect ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} > /dev/null 2>&1 && echo "true" || echo "false"); \
if [ "$$TAG_EXISTS" = "true" ]; then \
echo "Tag ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} already exists. Skipping push."; \
else \
echo "Tag ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} does not exist. Building and pushing..."; \
docker buildx build --squash --sbom=true --provenance=true . -f ./docker/hyperdx/Dockerfile \
--build-context clickhouse=./docker/clickhouse \
--build-context otel-collector=./docker/otel-collector \
--build-context hyperdx=./docker/hyperdx \
--build-context api=./packages/api \
--build-context app=./packages/app \
--build-arg CODE_VERSION=${CODE_VERSION} \
--platform ${BUILD_PLATFORMS} \
-t ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
-t ${NEXT_LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${NEXT_LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${NEXT_LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
--target all-in-one-noauth \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max; \
fi
.PHONY: release-all-in-one
release-all-in-one:
@TAG_EXISTS=$$(docker manifest inspect ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} > /dev/null 2>&1 && echo "true" || echo "false"); \
if [ "$$TAG_EXISTS" = "true" ]; then \
echo "Tag ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} already exists. Skipping push."; \
else \
echo "Tag ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} does not exist. Building and pushing..."; \
docker buildx build --squash --sbom=true --provenance=true . -f ./docker/hyperdx/Dockerfile \
--build-context clickhouse=./docker/clickhouse \
--build-context otel-collector=./docker/otel-collector \
--build-context hyperdx=./docker/hyperdx \
--build-context api=./packages/api \
--build-context app=./packages/app \
--build-arg CODE_VERSION=${CODE_VERSION} \
--platform ${BUILD_PLATFORMS} \
-t ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
-t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
--target all-in-one-auth \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max; \
fi
.PHONY: release-app
release-app:
@TAG_EXISTS=$$(docker manifest inspect ${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} > /dev/null 2>&1 && echo "true" || echo "false"); \
if [ "$$TAG_EXISTS" = "true" ]; then \
echo "Tag ${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} already exists. Skipping push."; \
else \
echo "Tag ${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} does not exist. Building and pushing..."; \
docker buildx build --squash --sbom=true --provenance=true . -f ./docker/hyperdx/Dockerfile \
--build-context hyperdx=./docker/hyperdx \
--build-context api=./packages/api \
--build-context app=./packages/app \
--build-arg CODE_VERSION=${CODE_VERSION} \
--platform ${BUILD_PLATFORMS} \
-t ${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}${IMAGE_VERSION_SUB_TAG} \
-t ${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION} \
-t ${IMAGE_NAME_DOCKERHUB}:${IMAGE_LATEST_TAG} \
--target prod \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max; \
fi
.PHONY: release-otel-collector-nightly
release-otel-collector-nightly:
@echo "Building and pushing nightly tag ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG}..."; \
docker buildx build --platform ${BUILD_PLATFORMS} . -f docker/otel-collector/Dockerfile \
-t ${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
-t ${NEXT_OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
--target prod \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max
.PHONY: release-app-nightly
release-app-nightly:
@echo "Building and pushing nightly tag ${IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG}..."; \
docker buildx build --squash --sbom=true --provenance=true . -f ./docker/hyperdx/Dockerfile \
--build-context hyperdx=./docker/hyperdx \
--build-context api=./packages/api \
--build-context app=./packages/app \
--build-arg CODE_VERSION=${IMAGE_NIGHTLY_TAG} \
--platform ${BUILD_PLATFORMS} \
-t ${IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
--target prod \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max
.PHONY: release-local-nightly
release-local-nightly:
@echo "Building and pushing nightly tag ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG}..."; \
docker buildx build --squash --sbom=true --provenance=true . -f ./docker/hyperdx/Dockerfile \
--build-context clickhouse=./docker/clickhouse \
--build-context otel-collector=./docker/otel-collector \
--build-context hyperdx=./docker/hyperdx \
--build-context api=./packages/api \
--build-context app=./packages/app \
--build-arg CODE_VERSION=${IMAGE_NIGHTLY_TAG} \
--platform ${BUILD_PLATFORMS} \
-t ${LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
-t ${NEXT_LOCAL_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
--target all-in-one-noauth \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max
.PHONY: release-all-in-one-nightly
release-all-in-one-nightly:
@echo "Building and pushing nightly tag ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG}..."; \
docker buildx build --squash --sbom=true --provenance=true . -f ./docker/hyperdx/Dockerfile \
--build-context clickhouse=./docker/clickhouse \
--build-context otel-collector=./docker/otel-collector \
--build-context hyperdx=./docker/hyperdx \
--build-context api=./packages/api \
--build-context app=./packages/app \
--build-arg CODE_VERSION=${IMAGE_NIGHTLY_TAG} \
--platform ${BUILD_PLATFORMS} \
-t ${ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
-t ${NEXT_ALL_IN_ONE_IMAGE_NAME_DOCKERHUB}:${IMAGE_NIGHTLY_TAG} \
--target all-in-one-auth \
--push \
--cache-from=type=gha \
--cache-to=type=gha,mode=max
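The `TAG_EXISTS` pattern deleted above — now centralized in the workflow's `check_version` job — relies on a common shell idiom: capture a command's exit status as a string via `&& echo / || echo`. A standalone sketch, with `false` standing in for `docker manifest inspect "$TAG" > /dev/null 2>&1` (assumption: inspect exits non-zero when the tag is absent):

```shell
# `false` simulates the manifest inspect of a tag that does not exist yet.
TAG_EXISTS=$(false && echo "true" || echo "false")
if [ "$TAG_EXISTS" = "true" ]; then
  echo "Tag already exists. Skipping push."
else
  echo "Tag does not exist. Building and pushing..."
fi
```

Running the check once up front, instead of once per Makefile target, means all four images share a single skip decision per release.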


@ -32,14 +32,27 @@ WORKDIR /app
COPY .yarn ./.yarn
COPY .yarnrc.yml yarn.lock package.json nx.json .prettierrc .prettierignore ./tsconfig.base.json ./
# Only copy package.json for workspace resolution during yarn install;
# full source is copied in the build stages that need it.
COPY ./packages/common-utils/package.json ./packages/common-utils/package.json
COPY ./packages/api/jest.config.js ./packages/api/tsconfig.json ./packages/api/tsconfig.build.json ./packages/api/package.json ./packages/api/
COPY ./packages/app/jest.config.js ./packages/app/tsconfig.json ./packages/app/tsconfig.build.json ./packages/app/package.json ./packages/app/next.config.mjs ./packages/app/mdx.d.ts ./packages/app/eslint.config.mjs ./packages/app/
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
# Use BuildKit cache mount to persist Yarn download cache across builds,
# so unchanged packages aren't re-downloaded even when yarn.lock changes.
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    yarn install --mode=skip-build && yarn cache clean
## common-utils builder ############################################################################
# Separate stage so common-utils build is cached independently of API/App source changes.
FROM node_base AS common-utils-builder
COPY ./packages/common-utils ./packages/common-utils
RUN yarn workspace @hyperdx/common-utils run build
## API/APP Builder Image ##########################################################################
@ -47,6 +60,9 @@ FROM node_base AS builder
WORKDIR /app
# Copy pre-built common-utils from its dedicated build stage
COPY --from=common-utils-builder /app/packages/common-utils ./packages/common-utils
COPY --from=api ./src ./packages/api/src
COPY --from=api ./bin ./packages/api/bin
COPY --from=app ./src ./packages/app/src
@ -58,9 +74,10 @@ COPY --from=app ./types ./packages/app/types
ENV NEXT_TELEMETRY_DISABLED=1
ENV NEXT_OUTPUT_STANDALONE=true
ENV NEXT_PUBLIC_IS_LOCAL_MODE=false
ENV NX_DAEMON=false
RUN yarn workspace @hyperdx/api run build && \
    yarn workspace @hyperdx/app run build
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    rm -rf node_modules && yarn workspaces focus @hyperdx/api --production
# prod ############################################################################################
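The cache-mount change in this Dockerfile follows the standard BuildKit pattern; a stripped-down standalone sketch, with the base image and copied files chosen for illustration:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package.json yarn.lock ./
# The mount exists only for the duration of this RUN; the downloaded
# package archives persist in BuildKit's cache store between builds,
# keyed by id=yarn-cache, so a yarn.lock change re-resolves the tree
# but does not re-download unchanged packages.
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    yarn install
```

Because the mount target lives outside the image layers, the final image does not grow with the cache contents.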


@ -5,30 +5,48 @@ WORKDIR /app
COPY .yarn ./.yarn
COPY .yarnrc.yml yarn.lock package.json nx.json .prettierrc .prettierignore ./tsconfig.base.json ./
# Only copy package.json for workspace resolution during yarn install;
# full source is copied in the build stages that need it.
COPY ./packages/common-utils/package.json ./packages/common-utils/package.json
COPY ./packages/api/jest.config.js ./packages/api/tsconfig.json ./packages/api/tsconfig.build.json ./packages/api/package.json ./packages/api/
COPY ./packages/api/bin ./packages/api/bin
# Use BuildKit cache mount to persist Yarn download cache across builds,
# so unchanged packages aren't re-downloaded even when yarn.lock changes.
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    yarn install --mode=skip-build && yarn cache clean
## dev #############################################################################################
FROM base AS dev
COPY ./packages/common-utils ./packages/common-utils
EXPOSE 8000
ENTRYPOINT ["npx", "nx", "run", "@hyperdx/api:dev"]
## common-utils-builder ############################################################################
# Separate stage so common-utils build is cached independently of API source changes.
FROM base AS common-utils-builder
COPY ./packages/common-utils ./packages/common-utils
RUN yarn workspace @hyperdx/common-utils run build
## builder #########################################################################################
FROM base AS builder
ENV NX_DAEMON false
# Copy pre-built common-utils from its dedicated build stage
COPY --from=common-utils-builder /app/packages/common-utils ./packages/common-utils
COPY ./packages/api/src ./packages/api/src
RUN yarn workspace @hyperdx/api run build
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    rm -rf node_modules && yarn workspaces focus @hyperdx/api --production
## prod ############################################################################################


@ -6,23 +6,40 @@ WORKDIR /app
COPY .yarn ./.yarn
COPY .yarnrc.yml yarn.lock package.json nx.json .prettierrc .prettierignore ./tsconfig.base.json ./
# Only copy package.json for workspace resolution during yarn install;
# full source is copied in the build stages that need it.
COPY ./packages/common-utils/package.json ./packages/common-utils/package.json
COPY ./packages/app/jest.config.js ./packages/app/tsconfig.json ./packages/app/tsconfig.build.json ./packages/app/package.json ./packages/app/next.config.mjs ./packages/app/mdx.d.ts ./packages/app/eslint.config.mjs ./packages/app/
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    yarn install --mode=skip-build && yarn cache clean
## dev #############################################################################################
FROM base AS dev
COPY ./packages/common-utils ./packages/common-utils
EXPOSE 8080
ENTRYPOINT ["npx", "nx", "run", "@hyperdx/app:dev"]
## common-utils-builder ############################################################################
# Separate stage so common-utils build is cached independently of App source changes.
FROM base AS common-utils-builder
COPY ./packages/common-utils ./packages/common-utils
RUN yarn workspace @hyperdx/common-utils run build
## builder #########################################################################################
# Rebuild the source code only when needed
FROM base AS builder
# Copy pre-built common-utils from its dedicated build stage
COPY --from=common-utils-builder /app/packages/common-utils ./packages/common-utils
# Expose custom env variables to the browser (needs NEXT_PUBLIC_ prefix)
# doc: https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables#bundling-environment-variables-for-the-browser
ARG OTEL_EXPORTER_OTLP_ENDPOINT
@ -31,15 +48,15 @@ ARG IS_LOCAL_MODE
ENV NEXT_PUBLIC_OTEL_EXPORTER_OTLP_ENDPOINT $OTEL_EXPORTER_OTLP_ENDPOINT
ENV NEXT_PUBLIC_OTEL_SERVICE_NAME $OTEL_SERVICE_NAME
ENV NEXT_PUBLIC_IS_LOCAL_MODE $IS_LOCAL_MODE
ENV NX_DAEMON false
COPY ./packages/app/src ./packages/app/src
COPY ./packages/app/pages ./packages/app/pages
COPY ./packages/app/public ./packages/app/public
COPY ./packages/app/styles ./packages/app/styles
COPY ./packages/app/types ./packages/app/types
RUN yarn workspace @hyperdx/app run build
RUN --mount=type=cache,target=/root/.yarn/berry/cache,id=yarn-cache \
    rm -rf node_modules && yarn workspaces focus @hyperdx/app --production
## prod ############################################################################################