mirror of
https://github.com/fleetdm/fleet
synced 2026-04-21 13:37:30 +00:00
**Related issue:** Resolves #35603

# Details

This PR optimizes how scheduled query results are recorded in the database. Previously, each time a result set was received from a host, the Fleet server counted all of the current result rows in the DB for that query before deciding whether to save more. This count grows more expensive as the DB grows, until it becomes the long pole in the recording process.

With this PR, the system changes in the following ways:

* When result rows are received from a host, no count is taken immediately. Instead, the server checks a Redis key holding an approximate count of the rows currently in the table for that query. If the count is over the configured row limit, no rows are saved. Otherwise, the rows are saved and the count is adjusted accordingly (it can go down, e.g. if a host previously returned 5 rows for a query and now returns 3). Keep in mind that we store only one set of results per host for a scheduled query; when a host reports results for a query, we delete that host's previous results and write the new ones if there's room.
* As an additional failsafe against runaway queries, any result set containing more than 1,000 rows is rejected outright.
* Once a minute, a cron job deletes all rows over the limit for each query and resets each query's counter to the actual number of rows in the table.

The end result:

* No more expensive counts on every distributed write request for scheduled queries.
* Results for a single query can burst over the limit for a short time, but are cleaned up within a minute.
* Because of concurrency and race conditions, where multiple hosts may read the same count from Redis before inserting rows, the actual number of results in the DB can burst higher than the limit.
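The check-and-adjust flow above can be sketched in Go. This is a minimal in-memory sketch, not Fleet's actual implementation: an atomic counter stands in for the Redis key, a map stands in for the results table, and the names (`queryResults`, `record`, the limit constants) are hypothetical.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

const (
	rowLimit         = 50000 // configured per-query row limit
	maxRowsPerResult = 1000  // failsafe: reject oversized result sets outright
)

type queryResults struct {
	approxCount int64             // stand-in for the Redis counter key
	rowsByHost  map[uint][]string // one stored result set per host
}

func newQueryResults() *queryResults {
	return &queryResults{rowsByHost: make(map[uint][]string)}
}

// record mirrors the write path described above: reject runaway result
// sets, consult the approximate counter instead of issuing a COUNT(*),
// replace the host's previous rows, and adjust the counter by the delta
// (which may be negative).
func (q *queryResults) record(hostID uint, rows []string) bool {
	if len(rows) > maxRowsPerResult {
		return false // runaway-query failsafe
	}
	if atomic.LoadInt64(&q.approxCount) >= rowLimit {
		return false // table is (approximately) full; discard
	}
	delta := int64(len(rows)) - int64(len(q.rowsByHost[hostID]))
	q.rowsByHost[hostID] = rows
	atomic.AddInt64(&q.approxCount, delta)
	return true
}

func main() {
	q := newQueryResults()
	q.record(1, []string{"a", "b", "c", "d", "e"}) // host 1 reports 5 rows
	q.record(1, []string{"a", "b", "c"})           // same host now reports 3
	fmt.Println(atomic.LoadInt64(&q.approxCount))  // prints 3: the counter shrank
}
```

Because the counter is read before rows are written and nothing locks the check-and-insert pair, several hosts near the limit can all pass the check at once; that is exactly the short-lived burst over the limit that the once-a-minute cleanup job then corrects.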
In testing with osquery-perf (1,000 hosts started simultaneously, each sending 500 rows at a time, a 50,000-row limit, and a query running every 10 seconds), I saw the table reach about 60,000 rows at times before being cleaned up. This is close to a worst case; in the real world there would be far more jitter in reporting, and queries would rarely return this many rows.

# Checklist for submitter

If some of the following don't apply, delete the relevant line.

- [X] Changes file added for user-visible changes in `changes/`, `orbit/changes/` or `ee/fleetd-chrome/changes`. See [Changes files](https://github.com/fleetdm/fleet/blob/main/docs/Contributing/guides/committing-changes.md#changes-files) for more information.
- [X] Input data is properly validated, `SELECT *` is avoided, SQL injection is prevented (using placeholders for values in statements)

## Testing

- [X] Added/updated automated tests

  Added a new test to verify that results are still discarded when the table size exceeds the limit; updated existing tests.
- [X] Where appropriate, [automated tests simulate multiple hosts and test for host isolation](https://github.com/fleetdm/fleet/blob/main/docs/Contributing/reference/patterns-backend.md#unit-testing) (updates to one host's records do not affect another)
- [X] QA'd all new/changed functionality manually

  Ran osquery-perf with 1,000 hosts and a 50,000-row limit per query, using queries that returned 1, 500, and 1,000 rows at a time. Verified that the limits were respected (subject to the amount of flex discussed above). I'm running some A/B tests now using local MySQL metrics and will report back.
## Summary by CodeRabbit

* **New Features**
  * Automated periodic cleanup of excess query results to retain recent data and free storage
  * Redis-backed query result counting to track per-query result volumes
* **Performance Improvements**
  * Optimized recording of scheduled query results for reduced overhead
  * Cleanup runs in configurable batches to lower database contention and balance storage use