perf: Implement query chunking for charts (#1233)

# Summary

Closes HDX-2310
Closes HDX-2616

This PR implements chunking of chart queries to improve the performance of charts over large data sets and long time ranges. Recent data is loaded first, then older data is loaded one chunk at a time until the full chart date range has been queried.

https://github.com/user-attachments/assets/83333041-9e41-438a-9763-d6f6c32a0576

## Performance Impacts

### Expectations

This change is intended to improve performance in a few ways:

1. Queries over long time ranges are now much less likely to time out, since the range is chunked into several smaller queries
2. Average memory usage should decrease, since the total result size and number of rows being read are smaller
3. _Perceived_ latency of queries over long date ranges is likely to decrease, because users will start seeing charts render (more recent) data as soon as the first chunk is queried, instead of after the entire date range has been queried. **However**, _total_ latency to display results for the entire date range is likely to increase, due to additional round-trip network latency being added for each additional chunk.
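As a rough illustration of this tradeoff (all numbers below are hypothetical, not measurements from this PR), sequential chunking pays one round trip per chunk but renders the first chunk much sooner:

```typescript
// Hypothetical latency model: chunking trades total latency for time-to-first-render.
type LatencyModel = {
  timeToFirstRender: number; // ms until the user sees any data
  totalLatency: number; // ms until the full range is displayed
};

function unchunked(queryMs: number, rttMs: number): LatencyModel {
  // One query over the whole range: nothing renders until it completes.
  const total = rttMs + queryMs;
  return { timeToFirstRender: total, totalLatency: total };
}

function chunked(chunkQueryMs: number[], rttMs: number): LatencyModel {
  // Chunks run sequentially, so each chunk adds its own round trip.
  const totals = chunkQueryMs.map(q => rttMs + q);
  return {
    timeToFirstRender: totals[0],
    totalLatency: totals.reduce((a, b) => a + b, 0),
  };
}

// Example: a 4s query split into four 1.1s chunks with 100ms round-trip time.
const single = unchunked(4000, 100); // nothing visible for 4100ms
const split = chunked([1100, 1100, 1100, 1100], 100);
// First (most recent) chunk renders after 1200ms, but the full range takes 4800ms.
```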

### Measured Results

Overall, the results match the expectations outlined above.

- Total latency changed by between ~-4% and ~+25%
- Average memory usage decreased by between 18% and 80%

<details>
<summary>Scenarios and data</summary>

In each of the following tests:

1. Queries were run 5 times before measurement began, to ensure the data was in the filesystem cache.
2. Queries were then run 3 times. The results shown are the median of those 3 runs.

#### Scenario: Log Search Histogram in Staging V2, 2 Day Range, No Filter

|   | Total Latency (s) | Memory Usage (Avg) | Memory Usage (Max) | Chunk Count |
|---|---|---|---|---|
|  Original |  5.36 |  409.23 MiB |  409.23 MiB | 1  |
|  Chunked |  5.14 | 83.06 MiB  | 232.69 MiB  |  4 |

#### Scenario: Log Search Histogram in Staging V2, 14 Day Range, No Filter

|   | Total Latency (s) | Memory Usage (Avg) | Memory Usage (Max) | Chunk Count |
|---|---|---|---|---|
|  Original |  26.56 |  383.63 MiB |  383.63 MiB | 1  |
|  Chunked |  33.08 | 130.00 MiB  | 241.21 MiB  |  16 |

#### Scenario: Chart Explorer Line Chart with p90 and p99 trace durations, Staging V2 Traces, Filtering for "GET" spans, 7 Day range

|   | Total Latency (s) | Memory Usage (Avg) | Memory Usage (Max) | Chunk Count |
|---|---|---|---|---|
|  Original |  2.79 |  346.12 MiB |  346.12 MiB | 1  |
|  Chunked |  3.26 | 283.00 MiB  | 401.38 MiB  |  9 |

</details>

## Implementation Notes

<details>
<summary>When is chunking used?</summary>
Chunking is used when all of the following are true:

1. `granularity` and `timestampValueExpression` are defined in the config. This ensures that the query is already being bucketed. Without bucketing, chunking would break aggregation queries, since groups can span multiple chunks.
2. `dateRange` is defined in the config. Without a date range, we'd need an unbounded set of chunks, or the start and end chunks would have to be unbounded at their start and end, respectively.
3. The config is not a metrics query. Metrics queries have complex logic which we want to avoid breaking with the initial delivery of this feature.
4. The consumer of `useQueriedChartConfig` passes the `enableQueryChunking: true` option. Chunking is opt-in, so it stays disabled where it is not desirable (e.g. when data is being sampled across the entire range).
</details>
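The conditions above can be restated as a simplified predicate (a sketch over a pared-down config shape, not the exact `ChartConfigWithOptDateRange` type used in the codebase):

```typescript
// Simplified chart config; the real type has many more fields.
type Config = {
  granularity?: string;
  timestampValueExpression?: string;
  dateRange?: [Date, Date];
  isMetricQuery?: boolean; // stand-in for isMetricChartConfig(config)
};

function shouldChunk(config: Config, enableQueryChunking: boolean): boolean {
  if (!enableQueryChunking) return false; // consumer must opt in
  // Bucketing is required; otherwise groups could span chunk boundaries.
  if (!config.granularity || !config.timestampValueExpression) return false;
  // A bounded date range is required; otherwise chunks would be unbounded.
  if (!config.dateRange) return false;
  // Metric queries are excluded for now.
  if (config.isMetricQuery) return false;
  return true;
}
```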

<details>
<summary>How are time windows chosen?</summary>

1. First, windows are generated as they are for the existing search chunking feature (e.g. 6 hours back, another 6 hours back, then 12 hours back, then 24 hours back...).
2. Then, the start and end of each window are aligned to the start of a time bucket, based on the "granularity" of the chart.
3. Finally, the first and last windows are shortened or extended so that the combined date range of all windows matches the start and end of the original config.
</details>
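The steps above can be sketched as follows (simplified; the real implementation also aligns window edges to chart buckets via `toStartOfInterval` and clamps the first and last windows):

```typescript
// Progressive window sizes, matching the search chunking feature: 6h, 6h, 12h, then 24h repeated.
const WINDOW_SECONDS = [6 * 3600, 6 * 3600, 12 * 3600, 24 * 3600];

// Step 1: walk backwards from the end of the range, emitting progressively larger windows.
function windowsDescending(start: Date, end: Date): Array<[Date, Date]> {
  const windows: Array<[Date, Date]> = [];
  let currentEnd = end.getTime();
  for (let i = 0; currentEnd > start.getTime(); i++) {
    // Past the end of the list, keep using the largest window size.
    const size =
      (WINDOW_SECONDS[i] ?? WINDOW_SECONDS[WINDOW_SECONDS.length - 1]) * 1000;
    const windowStart = Math.max(currentEnd - size, start.getTime());
    windows.push([new Date(windowStart), new Date(currentEnd)]);
    currentEnd = windowStart;
  }
  return windows;
}

// Steps 2 and 3 (not shown): align each boundary to the start of a chart bucket,
// then shorten/extend the first and last windows back to the original start/end.
// e.g. a 48h range yields 4 windows, walking back from the end: 6h, 6h, 12h, 24h.
```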

<details>
<summary>Which order are the chunks queried in?</summary>

Chunks are queried sequentially, most-recent first, due to the expectation that more recent data is typically more important to the user. Unlike with `useOffsetPaginatedSearch`, we are not paginating the data beyond the chunks, and all data is typically displayed together, so there is no need to support "ascending" order.
</details>

<details>
<summary>Does this improve client-side caching behavior?</summary>

One theoretical way in which query chunking could improve performance is by enabling client-side caching of individual chunks, which could then be reused if the same query is run over a longer time range.

Unfortunately, when using `streamedQuery`, react-query stores the entire time range as one item in the cache, so it does not reuse individual chunks or "pages" from another query.

We could accomplish this improvement by using `useQueries` instead of `streamedQuery` or `useInfiniteQuery`. In that case, we'd treat each chunk as its own query. This would require a number of changes:

1. Our query key would have to include the chunk's window duration.
2. We'd need some hacky way of making the `useQueries` requests fire in sequence. This can be done using `enabled`, but requires some additional state to determine whether the previous query is done.
3. We'd need to emulate the return value of a `useQuery` using the `useQueries` result, or update consumers.
</details>
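For example, the sequencing in point 2 could be driven by a small helper that enables chunk N only once chunks 0 through N-1 have resolved (a hypothetical sketch, not code from this PR; `fetchChunk`, `windows`, and `prefix` are assumed names):

```typescript
// Given which chunk queries have finished, decide which should be enabled.
// Chunk i is enabled only when all earlier chunks are done, so requests fire in sequence.
function enabledFlags(doneByChunk: boolean[]): boolean[] {
  return doneByChunk.map((_, i) => doneByChunk.slice(0, i).every(Boolean));
}

// In a component this would feed useQueries, roughly:
// useQueries({ queries: windows.map((w, i) => ({
//   queryKey: [prefix, config, w],      // point 1: window duration in the key
//   queryFn: () => fetchChunk(config, w),
//   enabled: flags[i],                  // point 2: sequential firing
// })) });
```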
This commit is contained in:
Drew Davis 2025-10-27 15:02:59 +01:00 committed by GitHub
parent 21614b94aa
commit ff86d40006
11 changed files with 1549 additions and 123 deletions


@ -0,0 +1,5 @@
---
"@hyperdx/app": patch
---
feat: Implement query chunking for charts


@ -65,8 +65,9 @@ function DBTimeChartComponent({
const { data, isLoading, isError, error, isPlaceholderData, isSuccess } =
useQueriedChartConfig(queriedConfig, {
placeholderData: (prev: any) => prev,
queryKey: [queryKeyPrefix, queriedConfig],
queryKey: [queryKeyPrefix, queriedConfig, 'chunked'],
enabled,
enableQueryChunking: true,
});
useEffect(() => {
@ -75,7 +76,8 @@ function DBTimeChartComponent({
}
}, [isError, isErrorExpanded, errorExpansion]);
const isLoadingOrPlaceholder = isLoading || isPlaceholderData;
const isLoadingOrPlaceholder =
isLoading || !data?.isComplete || isPlaceholderData;
const { data: source } = useSource({ id: sourceId });
const { graphResults, timestampColumn, groupKeys, lineNames, lineColors } =


@ -29,10 +29,11 @@ export default function PatternTable({
const [selectedPattern, setSelectedPattern] = useState<Pattern | null>(null);
const { totalCount, isLoading: isTotalCountLoading } = useSearchTotalCount(
totalCountConfig,
totalCountQueryKeyPrefix,
);
const {
totalCount,
isLoading: isTotalCountLoading,
isTotalCountComplete,
} = useSearchTotalCount(totalCountConfig, totalCountQueryKeyPrefix);
const {
data: groupedResults,
@ -46,7 +47,8 @@ export default function PatternTable({
totalCount,
});
const isLoading = isTotalCountLoading || isGroupedPatternsLoading;
const isLoading =
isTotalCountLoading || !isTotalCountComplete || isGroupedPatternsLoading;
const sortedGroupedResults = useMemo(() => {
return Object.values(groupedResults).sort(


@ -10,7 +10,7 @@ export function useSearchTotalCount(
config: ChartConfigWithDateRange,
queryKeyPrefix: string,
) {
// copied from DBTimeChart
// queriedConfig, queryKey, and enableQueryChunking match DBTimeChart so that react query can de-dupe these queries.
const { granularity } = useTimeChartSettings(config);
const queriedConfig = {
...config,
@ -22,12 +22,15 @@ export function useSearchTotalCount(
isLoading,
isError,
} = useQueriedChartConfig(queriedConfig, {
queryKey: [queryKeyPrefix, queriedConfig],
queryKey: [queryKeyPrefix, queriedConfig, 'chunked'],
staleTime: 1000 * 60 * 5,
refetchOnWindowFocus: false,
placeholderData: keepPreviousData, // no need to flash loading state when in live tail
enableQueryChunking: true,
});
const isTotalCountComplete = !!totalCountData?.isComplete;
const totalCount = useMemo(() => {
return totalCountData?.data?.reduce(
(p: number, v: any) => p + Number.parseInt(v['count()']),
@ -39,6 +42,7 @@ export function useSearchTotalCount(
totalCount,
isLoading,
isError,
isTotalCountComplete,
};
}

File diff suppressed because it is too large.


@ -309,6 +309,40 @@ describe('useOffsetPaginatedQuery', () => {
// Should have more pages available due to large time range
expect(result.current.hasNextPage).toBe(true);
});
it('should handle a time range with the same start and end date by generating one window', async () => {
const config = createMockChartConfig({
dateRange: [
new Date('2024-01-01T00:00:00Z'),
new Date('2024-01-01T00:00:00Z'), // same start and end date
] as [Date, Date],
});
// Mock the reader to return data for first window
mockReader.read
.mockResolvedValueOnce({
done: false,
value: [
{ json: () => ['timestamp', 'message'] },
{ json: () => ['DateTime', 'String'] },
{ json: () => ['2024-01-01T01:00:00Z', 'test log 1'] },
],
})
.mockResolvedValueOnce({ done: true });
const { result } = renderHook(() => useOffsetPaginatedQuery(config), {
wrapper,
});
await waitFor(() => expect(result.current.isLoading).toBe(false));
// Should have data from the first window
expect(result.current.data).toBeDefined();
expect(result.current.data?.window.windowIndex).toBe(0);
// Should have more pages available due to large time range
expect(result.current.hasNextPage).toBe(true);
});
});
describe('Pagination Within Time Windows', () => {


@ -1,57 +1,262 @@
import { useEffect } from 'react';
import objectHash from 'object-hash';
import {
ChSql,
chSqlToAliasMap,
ClickHouseQueryError,
inferNumericColumn,
inferTimestampColumn,
parameterizedQueryToSql,
ResponseJSON,
} from '@hyperdx/common-utils/dist/clickhouse';
import { renderChartConfig } from '@hyperdx/common-utils/dist/renderChartConfig';
import { ClickhouseClient } from '@hyperdx/common-utils/dist/clickhouse/browser';
import {
DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
isMetricChartConfig,
isUsingGranularity,
renderChartConfig,
} from '@hyperdx/common-utils/dist/renderChartConfig';
import { format } from '@hyperdx/common-utils/dist/sqlFormatter';
import { ChartConfigWithOptDateRange } from '@hyperdx/common-utils/dist/types';
import { useQuery, UseQueryOptions } from '@tanstack/react-query';
import {
ChartConfigWithDateRange,
ChartConfigWithOptDateRange,
} from '@hyperdx/common-utils/dist/types';
import {
useQuery,
useQueryClient,
UseQueryOptions,
} from '@tanstack/react-query';
import {
convertDateRangeToGranularityString,
toStartOfInterval,
} from '@/ChartUtils';
import { useClickhouseClient } from '@/clickhouse';
import { IS_MTVIEWS_ENABLED } from '@/config';
import { buildMTViewSelectQuery } from '@/hdxMTViews';
import { getMetadata } from '@/metadata';
import { generateTimeWindowsDescending } from '@/utils/searchWindows';
interface AdditionalUseQueriedChartConfigOptions {
onError?: (error: Error | ClickHouseQueryError) => void;
/**
* Queries with large date ranges can be split into multiple smaller queries to
* avoid overloading the ClickHouse server and running into timeouts. In some cases, such
* as when data is being sampled across the entire range, this chunking is not desirable
* and should be disabled.
*/
enableQueryChunking?: boolean;
}
// used for charting
type TimeWindow = {
dateRange: [Date, Date];
dateRangeEndInclusive?: boolean;
};
type TQueryFnData = Pick<ResponseJSON<any>, 'data' | 'meta' | 'rows'> & {
isComplete: boolean;
};
type TChunk = {
chunk: ResponseJSON<Record<string, string | number>>;
isComplete: boolean;
};
const shouldUseChunking = (
config: ChartConfigWithOptDateRange,
): config is ChartConfigWithDateRange & {
granularity: string;
} => {
// Granularity is required for chunking, otherwise we could break other group-bys.
if (!isUsingGranularity(config)) return false;
// Date range is required for chunking, otherwise we'd have infinite chunks, or some unbounded chunk(s).
if (!config.dateRange) return false;
// TODO: enable chunking for metric charts when we're confident chunking will not break
// complex metric queries.
if (isMetricChartConfig(config)) return false;
return true;
};
export const getGranularityAlignedTimeWindows = (
config: ChartConfigWithDateRange & { granularity: string },
windowDurationsSeconds?: number[],
): TimeWindow[] => {
const [startDate, endDate] = config.dateRange;
const windowsUnaligned = generateTimeWindowsDescending(
startDate,
endDate,
windowDurationsSeconds,
);
const granularity =
config.granularity === 'auto'
? convertDateRangeToGranularityString(
config.dateRange,
DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
)
: config.granularity;
const windows = [];
for (const [index, window] of windowsUnaligned.entries()) {
// Align windows to chart buckets
const alignedStart =
index === windowsUnaligned.length - 1
? window.startTime
: toStartOfInterval(window.startTime, granularity);
const alignedEnd =
index === 0 ? endDate : toStartOfInterval(window.endTime, granularity);
// Skip windows that are covered by the previous window after it was aligned
if (
!windows.length ||
alignedStart < windows[windows.length - 1].dateRange[0]
) {
windows.push({
dateRange: [alignedStart, alignedEnd] as [Date, Date],
// Ensure that windows don't overlap by making all but the first (most recent) exclusive
dateRangeEndInclusive:
index === 0 ? config.dateRangeEndInclusive : false,
});
}
}
return windows;
};
async function* fetchDataInChunks({
config,
clickhouseClient,
signal,
enableQueryChunking = false,
}: {
config: ChartConfigWithOptDateRange;
clickhouseClient: ClickhouseClient;
signal: AbortSignal;
enableQueryChunking?: boolean;
}) {
const windows =
enableQueryChunking && shouldUseChunking(config)
? getGranularityAlignedTimeWindows(config)
: [undefined];
if (IS_MTVIEWS_ENABLED) {
const { dataTableDDL, mtViewDDL, renderMTViewConfig } =
await buildMTViewSelectQuery(config);
// TODO: show the DDLs in the UI so users can run commands manually
// eslint-disable-next-line no-console
console.log('dataTableDDL:', dataTableDDL);
// eslint-disable-next-line no-console
console.log('mtViewDDL:', mtViewDDL);
await renderMTViewConfig();
}
for (let i = 0; i < windows.length; i++) {
const window = windows[i];
const windowedConfig = {
...config,
...(window ?? {}),
};
const result = await clickhouseClient.queryChartConfig({
config: windowedConfig,
metadata: getMetadata(),
opts: {
abort_signal: signal,
},
});
yield { chunk: result, isComplete: i === windows.length - 1 };
}
}
/** Append the given chunk to the given accumulated result */
function appendChunk(
accumulated: TQueryFnData,
{ chunk, isComplete }: TChunk,
): TQueryFnData {
return {
data: [...(chunk.data || []), ...(accumulated?.data || [])],
meta: chunk.meta,
rows: (accumulated?.rows || 0) + (chunk.rows || 0),
isComplete,
};
}
/**
* A hook providing data queried based on the provided chart config.
*
* If all of the following are true, the query will be chunked into multiple smaller queries:
* - The config includes a dateRange, granularity, and timestampValueExpression
* - `options.enableQueryChunking` is true
*
* For chunked queries, note the following:
* - `config.limit`, if provided, is applied to each chunk, so the total number
* of rows returned may be up to `limit * number_of_chunks`.
* - The returned data will be ordered within each chunk, and chunks will
* be ordered oldest-first, by the `timestampValueExpression`.
* - `isPending` is true until the first chunk is fetched. Once the first chunk
* is available, `isPending` will be false and `isSuccess` will be true.
* `isFetching` will be true until all chunks have been fetched.
* - `data.isComplete` indicates whether all chunks have been fetched.
*/
export function useQueriedChartConfig(
config: ChartConfigWithOptDateRange,
options?: Partial<UseQueryOptions<ResponseJSON<any>>> &
options?: Partial<UseQueryOptions<TQueryFnData>> &
AdditionalUseQueriedChartConfigOptions,
) {
const clickhouseClient = useClickhouseClient();
const query = useQuery<ResponseJSON<any>, ClickHouseQueryError | Error>({
queryKey: [config],
queryFn: async ({ signal }) => {
let query = null;
if (IS_MTVIEWS_ENABLED) {
const { dataTableDDL, mtViewDDL, renderMTViewConfig } =
await buildMTViewSelectQuery(config);
// TODO: show the DDLs in the UI so users can run commands manually
// eslint-disable-next-line no-console
console.log('dataTableDDL:', dataTableDDL);
// eslint-disable-next-line no-console
console.log('mtViewDDL:', mtViewDDL);
query = await renderMTViewConfig();
const queryClient = useQueryClient();
const query = useQuery<TQueryFnData, ClickHouseQueryError | Error>({
// Include enableQueryChunking in the query key to ensure that queries with the
// same config but different enableQueryChunking values do not share a query
queryKey: [config, options?.enableQueryChunking ?? false],
// TODO: Replace this with `streamedQuery` when it is no longer experimental. Use 'replace' refetch mode.
// https://tanstack.com/query/latest/docs/reference/streamedQuery
queryFn: async context => {
const query = queryClient
.getQueryCache()
.find({ queryKey: context.queryKey, exact: true });
const isRefetch = !!query && query.state.data !== undefined;
const emptyValue: TQueryFnData = {
data: [],
meta: [],
rows: 0,
isComplete: false,
};
const chunks = fetchDataInChunks({
config,
clickhouseClient,
signal: context.signal,
enableQueryChunking: options?.enableQueryChunking,
});
let accumulatedChunks: TQueryFnData = emptyValue;
for await (const chunk of chunks) {
if (context.signal.aborted) {
break;
}
accumulatedChunks = appendChunk(accumulatedChunks, chunk);
// When refetching, the cache is not updated until all chunks are fetched.
if (!isRefetch) {
queryClient.setQueryData<TQueryFnData>(
context.queryKey,
accumulatedChunks,
);
}
}
return clickhouseClient.queryChartConfig({
config,
metadata: getMetadata(),
opts: {
abort_signal: signal,
},
});
if (isRefetch && !context.signal.aborted) {
queryClient.setQueryData<TQueryFnData>(
context.queryKey,
accumulatedChunks,
);
}
return queryClient.getQueryData(context.queryKey)!;
},
retry: 1,
refetchOnWindowFocus: false,


@ -23,6 +23,11 @@ import api from '@/api';
import { getClickhouseClient } from '@/clickhouse';
import { getMetadata } from '@/metadata';
import { omit } from '@/utils';
import {
generateTimeWindowsAscending,
generateTimeWindowsDescending,
TimeWindow,
} from '@/utils/searchWindows';
type TQueryKey = readonly [
string,
@ -37,21 +42,6 @@ function queryKeyFn(
return [prefix, config, queryTimeout];
}
// Time window configuration - progressive bucketing strategy
const TIME_WINDOWS_MS = [
6 * 60 * 60 * 1000, // 6h
6 * 60 * 60 * 1000, // 6h
12 * 60 * 60 * 1000, // 12h
24 * 60 * 60 * 1000, // 24h
];
type TimeWindow = {
startTime: Date;
endTime: Date;
windowIndex: number;
direction: 'ASC' | 'DESC';
};
type TPageParam = {
windowIndex: number;
offset: number;
@ -69,65 +59,6 @@ type TData = {
pageParams: TPageParam[];
};
// Generate time windows from date range using progressive bucketing, starting at the end of the date range
function generateTimeWindowsDescending(
startDate: Date,
endDate: Date,
): TimeWindow[] {
const windows: TimeWindow[] = [];
let currentEnd = new Date(endDate);
let windowIndex = 0;
while (currentEnd > startDate) {
const windowSize =
TIME_WINDOWS_MS[windowIndex] ||
TIME_WINDOWS_MS[TIME_WINDOWS_MS.length - 1]; // use largest window size
const windowStart = new Date(
Math.max(currentEnd.getTime() - windowSize, startDate.getTime()),
);
windows.push({
endTime: new Date(currentEnd),
startTime: windowStart,
windowIndex,
direction: 'DESC',
});
currentEnd = windowStart;
windowIndex++;
}
return windows;
}
// Generate time windows from date range using progressive bucketing, starting at the beginning of the date range
function generateTimeWindowsAscending(startDate: Date, endDate: Date) {
const windows: TimeWindow[] = [];
let currentStart = new Date(startDate);
let windowIndex = 0;
while (currentStart < endDate) {
const windowSize =
TIME_WINDOWS_MS[windowIndex] ||
TIME_WINDOWS_MS[TIME_WINDOWS_MS.length - 1]; // use largest window size
const windowEnd = new Date(
Math.min(currentStart.getTime() + windowSize, endDate.getTime()),
);
windows.push({
startTime: new Date(currentStart),
endTime: windowEnd,
windowIndex,
direction: 'ASC',
});
currentStart = windowEnd;
windowIndex++;
}
return windows;
}
// Get time window from page param
function getTimeWindowFromPageParam(
config: ChartConfigWithOptTimestamp,


@ -141,10 +141,11 @@ function usePatterns({
limit: { limit: samples },
});
const { data: sampleRows } = useQueriedChartConfig(
configWithPrimaryAndPartitionKey ?? config, // `config` satisfying type, never used due to `enabled` check
{ enabled: configWithPrimaryAndPartitionKey != null && enabled },
);
const { data: sampleRows, isLoading: isSampleLoading } =
useQueriedChartConfig(
configWithPrimaryAndPartitionKey ?? config, // `config` satisfying type, never used due to `enabled` check
{ enabled: configWithPrimaryAndPartitionKey != null && enabled },
);
const { data: pyodide, isLoading: isLoadingPyodide } = usePyodide({
enabled,
@ -191,7 +192,7 @@ function usePatterns({
return {
...query,
isLoading: query.isLoading || isLoadingPyodide,
isLoading: query.isLoading || isSampleLoading || isLoadingPyodide,
patternQueryConfig: configWithPrimaryAndPartitionKey,
};
}


@ -0,0 +1,101 @@
export const DEFAULT_TIME_WINDOWS_SECONDS = [
6 * 60 * 60, // 6h
6 * 60 * 60, // 6h
12 * 60 * 60, // 12h
24 * 60 * 60, // 24h
];
export type TimeWindow = {
startTime: Date;
endTime: Date;
windowIndex: number;
direction: 'ASC' | 'DESC';
};
// Generate time windows from date range using progressive bucketing, starting at the end of the date range
export function generateTimeWindowsDescending(
startDate: Date,
endDate: Date,
windowDurationsSeconds: number[] = DEFAULT_TIME_WINDOWS_SECONDS,
): TimeWindow[] {
if (startDate.getTime() === endDate.getTime()) {
return [
{
startTime: startDate,
endTime: endDate,
windowIndex: 0,
direction: 'DESC',
},
];
}
const windows: TimeWindow[] = [];
let currentEnd = new Date(endDate);
let windowIndex = 0;
while (currentEnd > startDate) {
const windowSizeSeconds =
windowDurationsSeconds[windowIndex] ||
windowDurationsSeconds[windowDurationsSeconds.length - 1]; // use largest window size
const windowSizeMs = windowSizeSeconds * 1000;
const windowStart = new Date(
Math.max(currentEnd.getTime() - windowSizeMs, startDate.getTime()),
);
windows.push({
endTime: new Date(currentEnd),
startTime: windowStart,
windowIndex,
direction: 'DESC',
});
currentEnd = windowStart;
windowIndex++;
}
return windows;
}
// Generate time windows from date range using progressive bucketing, starting at the beginning of the date range
export function generateTimeWindowsAscending(
startDate: Date,
endDate: Date,
windowDurationsSeconds: number[] = DEFAULT_TIME_WINDOWS_SECONDS,
): TimeWindow[] {
if (startDate.getTime() === endDate.getTime()) {
return [
{
startTime: startDate,
endTime: endDate,
windowIndex: 0,
direction: 'ASC',
},
];
}
const windows: TimeWindow[] = [];
let currentStart = new Date(startDate);
let windowIndex = 0;
while (currentStart < endDate) {
const windowSizeSeconds =
windowDurationsSeconds[windowIndex] ||
windowDurationsSeconds[windowDurationsSeconds.length - 1]; // use largest window size
const windowSizeMs = windowSizeSeconds * 1000;
const windowEnd = new Date(
Math.min(currentStart.getTime() + windowSizeMs, endDate.getTime()),
);
windows.push({
startTime: new Date(currentStart),
endTime: windowEnd,
windowIndex,
direction: 'ASC',
});
currentStart = windowEnd;
windowIndex++;
}
return windows;
}


@ -45,6 +45,9 @@ import {
splitAndTrimWithBracket,
} from '@/utils';
/** The default maximum number of buckets setting when determining a bucket duration for 'auto' granularity */
export const DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS = 60;
// FIXME: SQLParser.ColumnRef is incomplete
type ColumnRef = SQLParser.ColumnRef & {
array_index?: {
@ -71,7 +74,7 @@ export function isUsingGroupBy(
return chartConfig.groupBy != null && chartConfig.groupBy.length > 0;
}
function isUsingGranularity(
export function isUsingGranularity(
chartConfig: ChartConfigWithOptDateRange,
): chartConfig is Omit<
Omit<Omit<ChartConfigWithDateRange, 'granularity'>, 'dateRange'>,
@ -467,7 +470,10 @@ function timeBucketExpr({
const unsafeInterval = {
UNSAFE_RAW_SQL:
interval === 'auto' && Array.isArray(dateRange)
? convertDateRangeToGranularityString(dateRange, 60)
? convertDateRangeToGranularityString(
dateRange,
DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
)
: interval,
};
@ -929,7 +935,10 @@ function renderDeltaExpression(
) {
const interval =
chartConfig.granularity === 'auto' && Array.isArray(chartConfig.dateRange)
? convertDateRangeToGranularityString(chartConfig.dateRange, 60)
? convertDateRangeToGranularityString(
chartConfig.dateRange,
DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
)
: chartConfig.granularity;
const intervalInSeconds = convertGranularityToSeconds(interval ?? '');
@ -1076,7 +1085,10 @@ async function translateMetricChartConfig(
includedDataInterval:
chartConfig.granularity === 'auto' &&
Array.isArray(chartConfig.dateRange)
? convertDateRangeToGranularityString(chartConfig.dateRange, 60)
? convertDateRangeToGranularityString(
chartConfig.dateRange,
DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
)
: chartConfig.granularity,
},
metadata,
@ -1190,7 +1202,10 @@ async function translateMetricChartConfig(
includedDataInterval:
chartConfig.granularity === 'auto' &&
Array.isArray(chartConfig.dateRange)
? convertDateRangeToGranularityString(chartConfig.dateRange, 60)
? convertDateRangeToGranularityString(
chartConfig.dateRange,
DEFAULT_AUTO_GRANULARITY_MAX_BUCKETS,
)
: chartConfig.granularity,
} as ChartConfigWithOptDateRangeEx;