Closes HDX-3236
# Summary
This PR fixes an error that occurs when a metricName/metricType is set for a dashboard tile configuration, despite the queried source not being a metric source.
1. Updates in `DBEditTimeChartForm` prevent us from saving configurations with metricName/metricType for non-metric sources.
2. Updates in `DBDashboardPage` ensure that metricName/metricType is ignored for any saved configurations for non-metric sources.
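The second guard can be sketched roughly as follows. This is an illustrative sketch, not the actual code: the `TileConfig` shape and the `sanitizeTileConfig` name are hypothetical.

```typescript
// Hypothetical tile configuration shape; the real type in the app differs.
interface TileConfig {
  sourceId: string;
  metricName?: string;
  metricType?: string;
  [key: string]: unknown;
}

// Drop metric-only fields when the tile's source is not a metric source,
// so a stale metricName/metricType can't break chart rendering.
function sanitizeTileConfig(
  config: TileConfig,
  isMetricSource: boolean,
): TileConfig {
  if (isMetricSource) return config;
  const { metricName, metricType, ...rest } = config;
  return rest;
}
```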
## Demo
A new tile would incorrectly be saved with a metricName/metricType when you:
1. Create the tile
2. Select a metric source
3. Select a metric name
4. Switch back to a non-metric source
5. Save
And the Dashboard tile would then error:
<img width="1288" height="1012" alt="Screenshot 2026-01-23 at 2 39 38 PM" src="https://github.com/user-attachments/assets/4fa4b0bf-355e-47bb-a504-cd03e0dca2d0" />
Now, the configuration is not saved with metricName/Type, and the dashboard does not error for a saved configuration that has a metricName/Type:
<img width="769" height="423" alt="Screenshot 2026-01-23 at 2 43 04 PM" src="https://github.com/user-attachments/assets/92af36aa-dd46-47b8-ae59-d0e4bfcb28af" />
Closes HDX-3245
# Summary
This PR updates the Lucene to SQL compilation process to generate conditions using `hasAllTokens` when the target column has a text index defined.
`hasAllTokens` has a couple of limitations which are solved for:
1. The `needle` argument must be no more than 64 tokens, or `hasAllTokens` will error. To support search terms with more than 64 tokens, terms are first broken up into batches of 50 tokens, and each batch is passed to a separate `hasAllTokens` call. When multiple `hasAllTokens` calls are used, we also add a substring match: `lower(Body) LIKE '%term with many tokens...%'`.
2. `hasAllTokens` may only be used when `enable_full_text_index = 1`. The existence of a text index does not guarantee that `enable_full_text_index = 1`, since the text index could have been created with a query that explicitly specified `SETTINGS enable_full_text_index = 1`. We cannot set this option in every query HyperDX makes, because the setting was not available prior to v25.12. To solve for this, we check the value of `enable_full_text_index` in `system.settings`, and only use `hasAllTokens` if the setting exists and is enabled.
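The batching from point 1 can be sketched as follows. This is an illustrative sketch: `chunkTokens` and `buildTokenCondition` are hypothetical helper names, and the real compiler also handles tokenization and escaping.

```typescript
// Batches of 50 keep each hasAllTokens call safely under the 64-token limit.
const BATCH_SIZE = 50;

function chunkTokens(tokens: string[], size: number = BATCH_SIZE): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < tokens.length; i += size) {
    batches.push(tokens.slice(i, i + size));
  }
  return batches;
}

// Build the SQL condition: one hasAllTokens call per batch, plus a
// substring match when more than one batch is needed.
function buildTokenCondition(
  column: string,
  term: string,
  tokens: string[],
): string {
  const batches = chunkTokens(tokens);
  const calls = batches.map(
    batch => `hasAllTokens(${column}, [${batch.map(t => `'${t}'`).join(', ')}])`,
  );
  if (batches.length > 1) {
    calls.push(`lower(${column}) LIKE '%${term.toLowerCase()}%'`);
  }
  return calls.join(' AND ');
}
```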
## Testing Setup
### Enable Full Text Index
First, make sure you're running at least ClickHouse 25.12.
Then, update the default profile in ClickHouse's `users.xml` with the following (or otherwise update your user's profile):
```xml
<clickhouse>
<profiles>
<default>
...
<enable_full_text_index>1</enable_full_text_index>
</default>
</profiles>
...
</clickhouse>
```
### Add a Full Text Index
```sql
ALTER TABLE otel_logs ADD INDEX text_idx(Body)
TYPE text(tokenizer=splitByNonAlpha, preprocessor=lower(Body))
SETTINGS enable_full_text_index=1;
ALTER TABLE otel_logs MATERIALIZE INDEX text_idx;
```
## Limitations
1. We currently only support the `splitByNonAlpha` tokenizer. If the text index is created with a different tokenizer, `hasAllTokens` will not be used. If needed, this limitation can be removed in the future by implementing `tokenizeTerm`, `termContainsSeparators`, and token batching logic specific to the other tokenizers.
2. This requires the latest (Beta) version of the full text index and related setting, available in ClickHouse v25.12.
Revisits the bug fix from https://github.com/hyperdxio/hyperdx/pull/1614.
The alias map is now used in the `useRowWhere` hook.
Ref: HDX-3196
Example:
For a select list like
```sql
Timestamp,ServiceName,SeverityText,Body AS b, concat(b, 'blabla')
```
The query generated by `useRowWhere` is:
```sql
WITH (Body) AS b
SELECT
*,
Timestamp AS "__hdx_timestamp",
Body AS "__hdx_body",
TraceId AS "__hdx_trace_id",
SpanId AS "__hdx_span_id",
SeverityText AS "__hdx_severity_text",
ServiceName AS "__hdx_service_name",
ResourceAttributes AS "__hdx_resource_attributes",
LogAttributes AS "__hdx_event_attributes"
FROM
DEFAULT.otel_logs
WHERE
(
Timestamp = parseDateTime64BestEffort('2026-01-20T06:11:00.170000000Z', 9)
AND ServiceName = 'hdx-oss-dev-api'
AND SeverityText = 'info'
AND Body = 'Received alert metric [saved_search source]'
AND concat(b, 'blabla') = 'Received alert metric [saved_search source]blabla'
AND TimestampTime = parseDateTime64BestEffort('2026-01-20T06:11:00Z', 9)
)
LIMIT
1
```
Closes HDX-3220
Closes HDX-1718
Closes HDX-3205
# Summary
This PR adds an option that allows users to customize the 0-fill behavior on time charts. The default behavior remains to fill all empty intervals with 0. The user can now disable the filling behavior. When fill is disabled, series will appear to be interpolated.
This PR also consolidates various display settings into a drawer, replacing the existing Number Format drawer. In the process, various form-related bugs were fixed in the drawer, and micro/nano second input factors were added.
## New Chart Display Settings Drawer
<img width="1697" height="979" alt="Screenshot 2026-01-20 at 9 10 59 AM" src="https://github.com/user-attachments/assets/1683666a-7c56-4018-8e5b-2c6c814f0cd2" />
## Zero-fill behavior
Enabled (default):
<img width="1458" height="494" alt="Screenshot 2026-01-20 at 9 12 45 AM" src="https://github.com/user-attachments/assets/0306644e-d2ff-46d6-998b-eb458d5c9ccc" />
Disabled:
<img width="1456" height="505" alt="Screenshot 2026-01-20 at 9 12 37 AM" src="https://github.com/user-attachments/assets/f084887e-4099-4365-af4f-73eceaf5dc3d" />
Sometimes when generating charts, the page would crash.
<img width="4278" height="1886" alt="image" src="https://github.com/user-attachments/assets/befe9d95-9eb6-472f-8e13-792c6056b0f5" />
The fix was to correct the types shared with the server (so the server->app boundary is strongly typed). The server still sends no where clause, but the frontend was previously typed to expect one.
This PR also ports some changes from EE to OSS to avoid drift between the projects.
Fixes HDX-3227
## Overview
Adds support for Azure AI Anthropic API endpoints alongside the existing direct Anthropic API, with an extensible architecture that simplifies future integration of additional AI providers (OpenAI, Azure OpenAI, Google Gemini, etc.).
## Problem
HyperDX currently only supports Anthropic's direct API for AI-powered query assistance. Organizations using Azure AI services cannot integrate Anthropic models through Azure's infrastructure.
## Solution
### 1. Core Refactoring
**New `controllers/ai.ts`:**
- Centralized `getAIModel()` function for all AI configuration
- Multi-provider architecture
- Clean separation of concerns from business logic
**Updated `config.ts`:**
- Provider-agnostic environment variables
- Backward compatibility with legacy configuration
- Clear migration path for existing deployments
**Simplified `routers/api/ai.ts`:**
- Removed inline AI configuration
- Single line: `const model = getAIModel()`
- Business logic unchanged
### 2. Configuration
#### New Environment Variables (Recommended)
```
AI_PROVIDER=anthropic # Provider selection
AI_API_KEY=your-api-key # API key for any provider
AI_BASE_URL=https://... # Optional: Custom endpoint
AI_MODEL_NAME=model-name # Optional: Model/deployment name
```
#### Legacy Variables (Still Supported)
```
ANTHROPIC_API_KEY=sk-ant-api03-xxx # Auto-detected if no AI_PROVIDER set
```
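The provider-selection logic described above might look roughly like this. This is a hedged sketch: the environment variable names come from this PR, but the function body and `AIModelConfig` type are illustrative.

```typescript
interface AIModelConfig {
  provider: string;
  apiKey: string;
  baseUrl?: string;
  modelName?: string;
}

// Resolve the AI configuration from environment variables, falling back
// to the legacy ANTHROPIC_API_KEY when AI_PROVIDER is not set.
function getAIModel(env: Record<string, string | undefined>): AIModelConfig {
  // Legacy fallback: an ANTHROPIC_API_KEY alone implies the anthropic provider.
  if (!env.AI_PROVIDER && env.ANTHROPIC_API_KEY) {
    return { provider: 'anthropic', apiKey: env.ANTHROPIC_API_KEY };
  }
  if (!env.AI_PROVIDER || !env.AI_API_KEY) {
    throw new Error('AI provider is not configured');
  }
  return {
    provider: env.AI_PROVIDER,
    apiKey: env.AI_API_KEY,
    baseUrl: env.AI_BASE_URL,
    modelName: env.AI_MODEL_NAME,
  };
}
```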
Fixes #1588
Co-authored-by: Brandon Pereira <7552738+brandon-pereira@users.noreply.github.com>
Closes HDX-3154
This PR adds a feature that allows the user to add settings to a source. These settings are then appended to every query that is rendered through the `renderChartConfig` function, along with any other chart-specific settings.
See: https://clickhouse.com/docs/sql-reference/statements/select#settings-in-select-query
Most of the work was to pass the `source` or `source.querySettings` value through the code to the `renderChartConfig` calls and to update the related tests. There are also some UI changes in the `SourceForm` components.
`SQLParser.Parser` from `node-sql-parser` throws an error when it encounters a SETTINGS clause in a SQL string, so a function was added to remove that clause from any SQL passed to the parser. It assumes the SETTINGS clause is always at the end of the SQL string, and removes everything from the SETTINGS keyword onward.
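The stripping step can be sketched as follows. This is an illustrative sketch: the real helper's name may differ, and this naive version would also match a SETTINGS keyword appearing inside a string literal.

```typescript
// Remove a trailing SETTINGS clause so node-sql-parser can parse the query.
// Assumes the clause, when present, always sits at the end of the statement.
function stripSettingsClause(sql: string): string {
  return sql.replace(/\bSETTINGS\b[\s\S]*$/i, '').trimEnd();
}
```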
https://github.com/user-attachments/assets/7ac3b852-2c86-4431-88bc-106f982343bb
Closes HDX-2501
# Summary
This PR adds tests for dashboard filters.
- Create filter (from both log and metric sources)
- Delete filter
- Filters are populated with values from the source
- Filters are applied to dashboard tiles
Closes HDX-2873
# Summary
This PR backports a change to the event patterns sampling query from the EE repo. This change makes event patterns sample a random subset of the rows, rather than just a subset based on a LIMIT.
Closes HDX-3163
# Summary
This PR fixes a couple of accuracy issues in the Inserts per Table chart on the ClickHouse dashboard
1. Previously, written rows and written bytes would be 0 for any async inserts. To account for async insert rows/bytes, we now filter for AsyncInsertFlush events in the query log.
2. Previously, insert queries were double-counted because we were counting both QueryStart and QueryFinish events. Now we will only count QueryStart events.
<img width="1366" height="415" alt="Screenshot 2026-01-14 at 10 30 57 AM" src="https://github.com/user-attachments/assets/b1faa813-7a84-4009-8145-9a04338413e4" />
<img width="1366" height="417" alt="Screenshot 2026-01-14 at 10 30 53 AM" src="https://github.com/user-attachments/assets/899ce3f0-b1ea-4a9b-aede-b6d7408ac4d2" />
<img width="1368" height="420" alt="Screenshot 2026-01-14 at 10 30 44 AM" src="https://github.com/user-attachments/assets/47d18932-5972-4be0-8a36-0c0fa7e3c995" />
We have indices like `INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1` that track whether a key likely exists for one of the maps in a granule. ClickHouse actually rarely uses this, mostly just for strict equality queries. We have many more scenarios where this is useful.
This PR adds an expression to the rendered SQL for many Lucene queries. For a full list of scenarios, check the added test cases.
The condition added is `indexHint(mapContains(LogAttributes, 'key'))`. This should never change the outcome of a query because `indexHint` always returns `true`; it just hints to the planner that an index can be used. `mapContains` is the specific condition that the bloom filter index over `mapKeys` can evaluate.
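As a sketch, the added expression could be produced by a helper like this. The helper is hypothetical; the actual code lives in the Lucene-to-SQL compiler.

```typescript
// Build an indexHint(mapContains(...)) expression for a map column and key.
// indexHint(...) always evaluates to true, so ANDing it onto a WHERE clause
// cannot change the result set; it only lets the planner use the
// bloom_filter index defined over mapKeys(mapColumn).
function mapKeyIndexHint(mapColumn: string, key: string): string {
  const escaped = key.replace(/'/g, "\\'");
  return `indexHint(mapContains(${mapColumn}, '${escaped}'))`;
}
```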
Closes HDX-3070
Closes HDX-3066
# Summary
This PR improves the performance of Search and Dashboard filters by querying available filter values from materialized views, when possible. The existing `useMultipleGetKeyValues` has been updated to make use of `getKeyValuesWithMVs`, which works as follows:
1. Identify which materialized views support each of the requested keys. Keys must be `dimensionColumns` in the materialized view, the materialized view must support the provided date range, and the materialized view must support the provided filters (determined by running an EXPLAIN query).
2. Split the keys into groups based on which materialized view can provide their values. Query values for each group using the existing `getKeyValues` function. Sampling is disabled because it is assumed that MVs are small enough to be queried without sampling.
3. Query any keys which are not supported by any materialized view from the base table.
To reduce the number of EXPLAIN queries required to support this, and to generally decrease the number of concurrent requests for filters, Dashboard filter value queries are now batched by source. Values for each batch are then queried using `getKeyValuesWithMVs` (described above).
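The key-partitioning step can be sketched as a simple grouping of keys by view. This is an illustrative sketch: the real implementation also checks date ranges and runs EXPLAIN queries, and the names below are hypothetical.

```typescript
interface MaterializedView {
  name: string;
  dimensionColumns: string[];
}

// Partition the requested keys by the first materialized view whose
// dimension columns include them; keys no view supports fall back to
// being queried from the base table.
function groupKeysByMV(
  keys: string[],
  views: MaterializedView[],
): { byView: Map<string, string[]>; unsupported: string[] } {
  const byView = new Map<string, string[]>();
  const unsupported: string[] = [];
  for (const key of keys) {
    const view = views.find(v => v.dimensionColumns.includes(key));
    if (view) {
      const group = byView.get(view.name) ?? [];
      group.push(key);
      byView.set(view.name, group);
    } else {
      unsupported.push(key);
    }
  }
  return { byView, unsupported };
}
```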
Other fixes:
1. I've also updated the various filter functions and hooks to support abort signals, so that filter queries are canceled when a query value is no longer needed.
2. The getKeyValues cache key now includes `where` and `filters`, so that the filter values correctly update when new filters or where conditions are added on the search page.
Currently, if you create a source (e.g. a metrics source) and then switch it to a different source type (e.g. traces), the keys that only exist on the metrics source are not removed. This causes the app to error when rendering some charts.
Fixes HDX-3136
1. Ensures that multiple scrollbars are not shown for nested panels (click row -> surrounding context -> repeat a few times)
<img width="1402" height="876" alt="scrollbars" src="https://github.com/user-attachments/assets/f41c99e5-5fcb-47fa-9c40-243dbd926291" />
2. In the same state as above, closing the panel (by clicking outside) and then clicking another row caused the subpanel to re-open; it should open the root drawer instead.
3. Fixed an issue where scrolling in a drawer and then opening a nested drawer caused the nested drawer to appear incorrectly.
<img width="910" height="878" alt="Screenshot 2026-01-09 at 5 10 29 PM" src="https://github.com/user-attachments/assets/fd1fbc0c-4453-46fb-b310-2323ec2792e2" />
Fixes HDX-3171
Closes HDX-3189
# Summary
This PR fixes a bug that caused the services dashboard to reset to the default time range on each page load, rather than respecting the URL date range params.
The cause was that a couple of `useEffect` hooks were firing at page load and submitting a new date range.
Closes HDX-3094
# Summary
This PR standardizes available granularities and inferred/auto granularities throughout the app:
1. A duplicate `convertDateRangeToGranularityString` implementation was removed.
2. 10 minute granularity is no longer auto-inferred, because it (in combination with 15 minutes) breaks the property that all granularities are multiples of smaller granularities. Since MVs are only used when the chart granularity is a multiple of the MV granularity, we want to minimize the chance that a MV is 10 minutes and the chart is 15 minutes, or vice versa. To this end, MVs only support 15 minute granularity, and not 10 minute granularity (to align with alerts). By removing the 10 minute granularity from auto granularity inference, we decrease the chance of automatically choosing a granularity that can't be used with an MV.
3. The max buckets argument was standardized to a constant (`DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS`) with value 60. It is now an optional argument, only passed when a non-default value is required.
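Auto granularity inference under a max-bucket cap can be sketched like this. It is a hedged sketch: the constant name and value come from this PR, but the candidate granularity list and the function itself are illustrative.

```typescript
const DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS = 60;

// Candidate granularities in seconds (illustrative list, 10 minutes
// excluded); each entry is a multiple of the smaller entries, which is
// the property the PR preserves so MV granularities can divide chart
// granularities.
const GRANULARITIES_SECONDS = [30, 60, 300, 900, 1800, 3600, 43200, 86400];

// Pick the smallest granularity that keeps the bucket count at or under
// the cap.
function inferGranularity(
  rangeSeconds: number,
  maxBuckets: number = DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
): number {
  for (const g of GRANULARITIES_SECONDS) {
    if (rangeSeconds / g <= maxBuckets) return g;
  }
  return GRANULARITIES_SECONDS[GRANULARITIES_SECONDS.length - 1];
}
```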
Closes HDX-3124
# Summary
This PR makes the following changes
1. Date ranges for all MV queries are now aligned to the MV Granularity
2. Each chart type now has an indicator when the date range has been adjusted to align with either the MV Granularity or (in the case of Line/Bar charts) the Chart Granularity.
3. The `useQueriedChartConfig`, `useRenderedSqlChartConfig`, and `useOffsetPaginatedQuery` hooks have been updated to get the MV-optimized chart configuration from `useMVOptimizationExplanation`, which allows us to share the `EXPLAIN ESTIMATE` query results between the MV Optimization Indicator (the lightning bolt icon on each chart) and the chart itself. This roughly halves the number of `EXPLAIN ESTIMATE` queries that are made.
## Demo
<img width="1628" height="1220" alt="Screenshot 2026-01-08 at 11 42 39 AM" src="https://github.com/user-attachments/assets/80a06e3a-bbfc-4193-b6b7-5e0056c588d3" />
<img width="1627" height="1131" alt="Screenshot 2026-01-08 at 11 40 54 AM" src="https://github.com/user-attachments/assets/69879e3d-3a83-4c4d-9604-0552a01c17d7" />
## Testing
To test locally with an MV, you can use the following DDL
<details>
<summary>DDL For an MV</summary>

```sql
CREATE TABLE default.metrics_rollup_1m
(
`Timestamp` DateTime,
`ServiceName` LowCardinality(String),
`SpanKind` LowCardinality(String),
`StatusCode` LowCardinality(String),
`count` SimpleAggregateFunction(sum, UInt64),
`sum__Duration` SimpleAggregateFunction(sum, UInt64),
`avg__Duration` AggregateFunction(avg, UInt64),
`quantile__Duration` AggregateFunction(quantileTDigest(0.5), UInt64),
`min__Duration` SimpleAggregateFunction(min, UInt64),
`max__Duration` SimpleAggregateFunction(max, UInt64)
)
ENGINE = AggregatingMergeTree
PARTITION BY toDate(Timestamp)
ORDER BY (Timestamp, StatusCode, SpanKind, ServiceName)
SETTINGS index_granularity = 8192;
CREATE MATERIALIZED VIEW default.metrics_rollup_1m_mv TO default.metrics_rollup_1m
(
`Timestamp` DateTime,
`ServiceName` LowCardinality(String),
`SpanKind` LowCardinality(String),
`version` LowCardinality(String),
`StatusCode` LowCardinality(String),
`count` UInt64,
`sum__Duration` Int64,
`avg__Duration` AggregateFunction(avg, UInt64),
`quantile__Duration` AggregateFunction(quantileTDigest(0.5), UInt64),
`min__Duration` SimpleAggregateFunction(min, UInt64),
`max__Duration` SimpleAggregateFunction(max, UInt64)
)
AS SELECT
toStartOfMinute(Timestamp) AS Timestamp,
ServiceName,
SpanKind,
StatusCode,
count() AS count,
sum(Duration) AS sum__Duration,
avgState(Duration) AS avg__Duration,
quantileTDigestState(0.5)(Duration) AS quantile__Duration,
minSimpleState(Duration) AS min__Duration,
maxSimpleState(Duration) AS max__Duration
FROM default.otel_traces
GROUP BY
Timestamp,
ServiceName,
SpanKind,
StatusCode;
```
</details>