Dynamic module resolution causes issues in builds that use tools like esbuild; the resolved paths often differ from the paths in the source tree. Instead, we can simplify the loading logic to use a statically defined object map of names to constructor functions. The mapping object should be small enough that merge conflicts with forks are easy to resolve.
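A minimal sketch of the approach (the provider names and classes here are illustrative stand-ins, not the repo's actual modules):

```typescript
// Illustrative sketch of replacing dynamic resolution with a static map.
// The provider names/classes are assumptions for this example.
interface AlertProvider {
  name(): string;
}

class DefaultAlertProvider implements AlertProvider {
  name(): string {
    return 'default';
  }
}

// Static map of names to constructors; bundlers like esbuild can follow
// these static references, unlike dynamic require(computedPath) calls.
const PROVIDERS: Record<string, new () => AlertProvider> = {
  default: DefaultAlertProvider,
};

function loadProvider(name: string): AlertProvider {
  const Ctor = PROVIDERS[name];
  if (!Ctor) throw new Error(`Unknown provider: ${name}`);
  return new Ctor();
}
```

Forks can register an extra provider by adding one entry to `PROVIDERS`, which keeps the diff (and any merge conflict) to a single line.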
Closes HDX-2481
# Summary
For browser-based queries, the ClickHouse requests go through the clickhouse-proxy endpoint, which rewrites the user-agent header for analytics purposes.
Alerts queries don't go through the clickhouse-proxy, and clickhouse-js doesn't allow us to set the user-agent header directly. We can instead provide an `application` name, which is prepended to the default clickhouse-js user-agent, yielding a user-agent like:
```
hyperdx-alerts 2.5.0 clickhouse-js/1.12.1 (lv:nodejs/v22.19.0; os:darwin)
```
The user agent shows up in ClickHouse query logs:
<img width="607" height="279" alt="Screenshot 2025-09-25 at 1 27 36 PM" src="https://github.com/user-attachments/assets/8098648d-9245-42c5-a41c-d7a58186ad68" />
Optimize the query performance of the `getMapKeys` method to prevent excessive resource usage in ClickHouse, which occurred even when `max_rows_to_read` was specified.
Ref: HDX-2411
Ref: HDX-2431
When the search row limit is set very high (e.g. the max of 100k), the app quickly consumes all available memory and crashes.
This adds some improvements to help mitigate the problem:
1. **QueryKey Issues** - The `queryKey` generates a ton of extra cache entries every time `processedRows` changes (which is every 5s when in live mode). The query key and result are cached regardless of whether `enabled` is true or false. The base hashFn strategy is to [stringify the objects](2a00fb6504/packages/query-core/src/utils.ts (L216-L217)), which creates a very large string held in memory. I tried to fix this by providing a custom `queryKeyHashFn` to `useQuery`, but it was too slow, and the faster browser-based hashing fns return a promise, which `useQuery` doesn't support at this time. The easiest solution I found was to short-circuit the hash generation when we are not denoising.
2. **Sync `gcTime`** - We already set `gcTime` in `useOffsetPaginatedQuery`, so I added that field here too. This helps keep memory usage lower while denoising rows (though memory usage is still much higher than normal).
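The short-circuit in (1) can be sketched roughly like this (the key layout and `isDenoising` flag are assumptions for illustration, not the app's exact code):

```typescript
type QueryKey = readonly unknown[];

// Hypothetical sketch: when denoising is off, skip hashing the large
// processedRows entry (assumed here to be the last element of the key),
// so the stringified key stays small and stable between live-mode ticks.
function makeHashFn(isDenoising: boolean) {
  return (key: QueryKey): string => {
    const parts = isDenoising ? key : key.slice(0, -1);
    return JSON.stringify(parts);
  };
}
```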
**The app still uses very high memory, just from the sheer number of rows being captured and processed**, but it doesn't crash anymore. There are definitely further optimizations we could make to reduce this. One solution that comes to mind is computing a hash/unique id for each row server-side before sending it to the client; the app could then use this key instead of a stringified object.
Before (after 1 min):
<img width="645" height="220" alt="Screenshot 2025-09-17 at 4 05 59 PM" src="https://github.com/user-attachments/assets/dab0ba34-4e92-42ce-90a0-fefadd9f0556" />
After (after 5 mins):
<img width="1887" height="940" alt="Screenshot 2025-09-17 at 3 52 23 PM" src="https://github.com/user-attachments/assets/bd969d2a-f0ec-4a5a-9858-409ff4a1eaa1" />
Fixes: HDX-2409
HDX-2078
This PR shifts the creation of the ClickHouse client used by alerts to the AlertProvider interface, to support other auth methods in other AlertProvider implementations.
Running locally, the default provider creates the client and successfully queries with it:
<img width="1716" height="206" alt="Screenshot 2025-09-18 at 3 58 31 PM" src="https://github.com/user-attachments/assets/971a633f-6ddd-42ca-be70-19e303573938" />
Closes HDX-2281
This PR adds two additional functions to the `AlertProvider` interface, and implements them for the default provider. The intention behind these changes is to eliminate all direct Mongo access from the alert check loop, and instead handle the connection to Mongo through the `AlertProvider` interface.
The new functions are as follows:
1. `getWebhooks(teamId): Map<string, IWebhook>`: This function is used to retrieve the webhooks that may be used to send alerts. While it would be nice to just attach the webhook information to the AlertDetails, we have (unfinished?) support for referencing arbitrary webhooks within message templates, and this pattern better supports that future use case without having to process message templates while loading alerts.
2. `updateAlertState(AlertHistory)`: This function is used to update the state of an Alert document and save the given AlertHistory document.
The AlertDetails and AlertTask interfaces have also been updated to reduce the number of parameters passed to some functions while still shifting Mongo accesses to the AlertProvider:
1. AlertDetails now includes the `previous` AlertHistory value
2. AlertTask now includes the value of `now`
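A hedged sketch of the extended interface, with the two new functions described above (the type shapes below are simplified stand-ins for the real `IWebhook`/`AlertHistory` documents, and the in-memory class is for illustration only):

```typescript
// Simplified stand-in shapes; the real documents have more fields.
interface IWebhook {
  id: string;
  url: string;
}

interface AlertHistory {
  alertId: string;
  state: string;
  createdAt: Date;
}

// The two functions added to the AlertProvider interface in this PR.
interface AlertProvider {
  getWebhooks(teamId: string): Map<string, IWebhook>;
  updateAlertState(history: AlertHistory): void;
}

// Minimal in-memory implementation, for illustration only.
class InMemoryAlertProvider implements AlertProvider {
  private webhooks = new Map<string, IWebhook>([
    ['slack', { id: 'slack', url: 'https://hooks.example.com/slack' }],
  ]);
  private history: AlertHistory[] = [];

  getWebhooks(_teamId: string): Map<string, IWebhook> {
    return this.webhooks;
  }

  updateAlertState(h: AlertHistory): void {
    this.history.push(h);
  }
}
```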
Across the app, we are inconsistent about when the sidebar can be opened and expanded, because the sidebar and its logic were managed by the parent component.
Additionally, the expand logic assumed a certain structure that some places in the application could not support (e.g. the ClickHouse dashboard doesn't have a 'source').
As a result, I have created the `DBSqlRowTableWithSideBar` component, which manages many of the common use cases for us. This PR introduces that new component and updates all references (that could be easily upgraded) to use it where applicable.
The result: a lot less duplicate code (see # of lines removed) and the ability to more easily maintain the components down the road.
This PR also fixes several bugs I found as I tested these flows, especially around sidebars opening subpanels.
Fixes: HDX-2341
The build could fail locally depending on whether the `.d.ts` files in `common-utils` were generated. Since `common-utils` uses path aliases and tsup can't resolve those aliases in the generated declarations, we enable `skipLibCheck` ([which is recommended](https://www.typescriptlang.org/tsconfig/#skipLibCheck)) so those declaration files are not type checked, thereby sidestepping the alias issue while still type checking our own code.
**Note**: This setting is already enabled in `app`, so we are just syncing it to `api` and `common-utils`
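For reference, the relevant tsconfig fragment (the same setting `app` already uses):

```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```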
original ticket: https://github.com/hyperdxio/hyperdx/pull/1159
Closes HDX-2407
This PR fixes searches which sort in ascending time order or do not order based on time. With this change, time-based "chunking" of queries (#1125) will only be used when the results are ordered by time. Further, when ordering by time _ascending_, the chunks will load in ascending time order (rather than descending time order).
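The chunk-ordering rule can be sketched as follows (the function and parameter names are illustrative, not the code from #1125):

```typescript
type TimeOrder = 'asc' | 'desc' | null; // null: results not ordered by time

// Split [startMs, endMs) into chunks only when results are time-ordered,
// and emit the chunks in the same direction as the sort. When there is no
// time ordering, issue a single un-chunked query instead.
function planChunks(
  startMs: number,
  endMs: number,
  chunkMs: number,
  order: TimeOrder,
): Array<[number, number]> {
  if (order === null) return [[startMs, endMs]]; // no chunking
  const chunks: Array<[number, number]> = [];
  for (let t = startMs; t < endMs; t += chunkMs) {
    chunks.push([t, Math.min(t + chunkMs, endMs)]);
  }
  // Ascending sorts load oldest chunks first; descending sorts load newest first.
  return order === 'asc' ? chunks : chunks.reverse();
}
```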