## Summary

Update `packages/otel-collector/builder-config.yaml` to include commonly-used components from the upstream [opentelemetry-collector](https://github.com/open-telemetry/opentelemetry-collector) core and [opentelemetry-collector-contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib) distributions. This gives users more flexibility in their custom OTel configs without pulling in the entire contrib distribution (which causes very long compile times). Also adds Go module and build cache mounts to the OCB Docker build stage for faster rebuilds, and bumps CI timeouts for integration and smoke test jobs to account for the larger binary.

### Core extensions added (2)

- `memorylimiterextension` — memory-based limiting at the extension level
- `zpagesextension` — zPages debugging endpoints

### Contrib receivers added (4)

- `dockerstatsreceiver` — container metrics from Docker
- `filelogreceiver` — tail log files
- `k8sclusterreceiver` — Kubernetes cluster-level metrics
- `kubeletstatsreceiver` — node/pod/container metrics from kubelet

### Contrib processors added (12)

- `attributesprocessor` — insert/update/delete/hash attributes
- `cumulativetodeltaprocessor` — convert cumulative metrics to delta
- `filterprocessor` — drop unwanted telemetry
- `groupbyattrsprocessor` — reassign resource attributes
- `k8sattributesprocessor` — enrich telemetry with k8s metadata
- `logdedupprocessor` — deduplicate repeated log entries
- `metricstransformprocessor` — rename/aggregate/transform metrics
- `probabilisticsamplerprocessor` — percentage-based sampling
- `redactionprocessor` — mask/remove sensitive data
- `resourceprocessor` — modify resource attributes
- `spanprocessor` — rename spans, extract attributes
- `tailsamplingprocessor` — sample traces based on policies

### Contrib extensions added (1)

- `filestorage` — persistent file-based storage (used by clickhouse exporter sending queue in EE OpAMP controller)

### Other changes

- **Docker cache mounts**: Added `--mount=type=cache` for Go module and build caches in the OCB builder stage of both `docker/otel-collector/Dockerfile` and `docker/hyperdx/Dockerfile`
- **CI timeouts**: Bumped `integration` and `otel-smoke-test` jobs from 8 to 16 minutes in `.github/workflows/main.yml`

All existing HyperDX-specific components are preserved unchanged.

### How to test locally or on Vercel

1. Build the OTel Collector Docker image — verify OCB resolves all listed modules
2. Provide a custom OTel config that uses one of the newly-added components and verify it loads
3. Verify existing HyperDX OTel pipeline still functions

### References

- Linear Issue: https://linear.app/clickhouse/issue/HDX-4029
- Upstream core builder-config: https://github.com/open-telemetry/opentelemetry-collector/blob/main/cmd/otelcorecol/builder-config.yaml
- Upstream contrib builder-config: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/cmd/otelcontribcol/builder-config.yaml
# HyperDX
HyperDX, a core component of ClickStack, helps engineers quickly figure out why production is broken by making it easy to search & visualize logs and traces on top of any ClickHouse cluster (imagine Kibana, for ClickHouse).
Documentation • Chat on Discord • Live Demo • Bug Reports • Contributing • Website
- 🕵️ Correlate/search logs, metrics, session replays and traces all in one place
- 📝 Schema agnostic, works on top of your existing ClickHouse schema
- 🔥 Blazing fast searches & visualizations optimized for ClickHouse
- 🔍 Intuitive full-text search and property search syntax (ex. `level:err`), SQL optional!
- 📊 Analyze trends in anomalies with event deltas
- 🔔 Set up alerts in just a few clicks
- 📈 Dashboard high cardinality events without a complex query language
- Native JSON string querying
- ⚡ Live tail logs and traces to always get the freshest events
- 🔭 OpenTelemetry supported out of the box
- ⏱️ Monitor health and performance from HTTP requests to DB queries (APM)
## Spinning Up HyperDX
HyperDX can be deployed as part of ClickStack, which includes ClickHouse, HyperDX, OpenTelemetry Collector and MongoDB.
```shell
docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
Afterwards, you can visit http://localhost:8080 to access the HyperDX UI.
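For longer-lived local setups, the same command can be expressed as a Compose file. This is an illustrative sketch, not the repo's own `docker-compose.yml`; the service name is arbitrary:

```yaml
services:
  hyperdx:
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    ports:
      - "8080:8080" # HyperDX UI
      - "4317:4317" # OTLP/gRPC ingest
      - "4318:4318" # OTLP/HTTP ingest
```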
If you already have an existing ClickHouse instance, want to use a single container locally, or are looking for production deployment instructions, you can view the different deployment options in our deployment docs.
If your server is behind a firewall, you'll need to open/forward ports 8080, 8000, and 4318 for the UI, API, and OTel collector respectively.
We recommend at least 4GB of RAM and 2 cores for testing.
### Hosted ClickHouse Cloud
You can also deploy HyperDX with ClickHouse Cloud: sign up for free and get started in just minutes.
## Instrumenting Your App
To get logs, metrics, traces, session replay, etc into HyperDX, you'll need to instrument your app to collect and send telemetry data over to your HyperDX instance.
We provide a set of SDKs and integration options, such as Browser, Node.js, and Python, to make it easier to get started with HyperDX.
You can find the full list in our docs.
### OpenTelemetry
Additionally, HyperDX is compatible with OpenTelemetry, a vendor-neutral standard for instrumenting your application backed by CNCF. Supported languages/platforms include:
- Kubernetes
- Javascript
- Python
- Java
- Go
- Ruby
- PHP
- .NET
- Elixir
- Rust
(Full list here)
Once HyperDX is running, you can point your OpenTelemetry SDK to the
OpenTelemetry collector spun up at http://localhost:4318.
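Most OpenTelemetry SDKs honor the standard OTLP environment variables, so pointing an app at the local collector can be as simple as setting two variables before starting it (`my-service` is a placeholder name):

```shell
# Send telemetry to the local collector over OTLP/HTTP
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# Name under which this app's telemetry appears in HyperDX
export OTEL_SERVICE_NAME=my-service
```

SDKs that default to OTLP/gRPC will use port 4317 instead; both are exposed by the all-in-one image above.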
## Contributing
We welcome all contributions! There are many ways to contribute to the project, including but not limited to:
- Opening a PR (Contribution Guide)
- Submitting feature requests or bugs
- Improving our product or contribution documentation
- Voting on open issues or contributing use cases to a feature request
## Motivation
Our mission is to help engineers ship reliable software. To enable that, we believe every engineer needs to be able to easily leverage production telemetry to quickly solve burning production issues.
However, in our experience, the existing tools we've used tend to fall short in a few ways:
- They're expensive, and the pricing has failed to scale with TBs of telemetry becoming the norm, leading to teams aggressively cutting the amount of data they can collect.
- They're hard to use, requiring full-time SREs to set up, and domain experts to use confidently.
- They require hopping from tool to tool (logs, session replay, APM, exceptions, etc.) to stitch together the clues yourself.
We hope you give HyperDX in ClickStack a try and let us know how we're doing!
## Contact
## HyperDX Usage Data
HyperDX collects anonymized usage data for open source deployments. This data
supports our mission for observability to be available to any team and helps us
keep our open source product running well in a variety of different
environments. While we hope you will continue to support our mission in this
way, you may opt out of usage data collection by setting the
`USAGE_STATS_ENABLED` environment variable to `false`. Thank you for supporting
the development of HyperDX!
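For the all-in-one image shown earlier, the opt-out can be passed directly on the `docker run` command line:

```shell
docker run -e USAGE_STATS_ENABLED=false \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```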