# Log destinations

Log destinations can be used in Fleet to log:

- Scheduled query result logs
- Fleet audit logs
- Status logs from osquery

By default, logs are stored in the local filesystem on the Fleet server.
To configure an external log destination, you must set the correct logging configuration options in Fleet. Currently, only self-hosted users can modify this configuration. If you're a managed-cloud customer, please reach out to Fleet about modifying the configuration.
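The plugin for each log type is selected through Fleet server configuration. As a hedged sketch (the variable names follow Fleet's `FLEET_<NAMESPACE>_<OPTION>` environment-variable convention; verify them against your Fleet version's configuration reference):

```shell
# Choose a log plugin per log type before starting the Fleet server.
# These variable names are assumptions based on Fleet's naming convention;
# confirm them in the configuration reference for your version.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=firehose   # scheduled query results
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=firehose   # osquery status logs
export FLEET_ACTIVITY_ENABLE_AUDIT_LOG=true       # enable audit logging
export FLEET_ACTIVITY_AUDIT_LOG_PLUGIN=firehose   # Fleet audit logs
fleet serve
```

Each environment variable has an equivalent command-line flag (for example, `--osquery_result_log_plugin`).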
## Amazon Kinesis Data Firehose

Logs are written to Amazon Kinesis Data Firehose (Firehose).

- Plugin name: `firehose`
- Flag namespace: `firehose`
Firehose is a good method for aggregating osquery logs into Amazon S3.

Note that Firehose logging has limits discussed in the Firehose documentation. When Fleet encounters logs that are too big for Firehose, notifications are output in the Fleet logs and those logs are not sent to Firehose.
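A minimal Firehose configuration might look like the following sketch. The region and stream names are placeholders, and the exact option names should be confirmed in Fleet's configuration reference; AWS credentials can also come from an instance profile or an assumed IAM role rather than static keys.

```shell
# Sketch: point the firehose plugin at existing delivery streams.
# Stream names and region are placeholders.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=firehose
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=firehose
export FLEET_FIREHOSE_REGION=us-east-1
export FLEET_FIREHOSE_RESULT_STREAM=osquery_results
export FLEET_FIREHOSE_STATUS_STREAM=osquery_status
fleet serve
```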
## Webhook

See the webhook configuration docs.
## Snowflake
To send logs to Snowflake, you must first configure Fleet to send logs to Amazon Kinesis Data Firehose (Firehose). This is because you'll use the Snowflake Snowpipe integration to direct logs to Snowflake.
If you're using Fleet's best practice Terraform, Firehose is already configured as your log destination.
With Fleet configured to send logs to Firehose, you then load the data from Firehose into a Snowflake database. AWS provides instructions on how to direct logs to a Snowflake database in the AWS documentation.

Snowflake provides instructions on setting up the destination tables and IAM roles required in AWS in the Snowflake docs.
## Splunk

How to send logs to Splunk:

1. Follow Splunk's instructions to prepare Splunk for Firehose data.
2. Follow these AWS instructions on how to enable Firehose to forward directly to Splunk.
3. In your `main.tf` file, replace your S3 destination (`aws_kinesis_firehose_delivery_stream`) with a Splunk destination:
```hcl
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name        = "terraform-kinesis-firehose-test-stream"
  destination = "splunk"

  splunk_configuration {
    hec_endpoint               = "https://http-inputs-mydomain.splunkcloud.com:443"
    hec_token                  = "51D4DA16-C61B-4F5F-8EC7-ED4301342A4A"
    hec_acknowledgment_timeout = 600
    hec_endpoint_type          = "Event"
    s3_backup_mode             = "FailedEventsOnly"

    s3_configuration {
      role_arn           = aws_iam_role.firehose.arn
      bucket_arn         = aws_s3_bucket.bucket.arn
      buffering_size     = 10
      buffering_interval = 400
      compression_format = "GZIP"
    }
  }
}
```
For the latest configuration options, see HashiCorp's Terraform docs.
## Amazon Kinesis Data Streams

Logs are written to Amazon Kinesis Data Streams (Kinesis).

- Plugin name: `kinesis`
- Flag namespace: `kinesis`
Note that Kinesis logging has limits discussed in the documentation. When Fleet encounters logs that are too big for Kinesis, notifications appear in the Fleet server logs. Those logs will not be sent to Kinesis.
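A Kinesis setup is analogous to Firehose, differing mainly in the flag namespace. A sketch with placeholder region and stream names (verify the option names against Fleet's configuration reference):

```shell
# Sketch: send results and status logs to Kinesis data streams.
# Region and stream names are placeholders.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=kinesis
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=kinesis
export FLEET_KINESIS_REGION=us-east-1
export FLEET_KINESIS_RESULT_STREAM=osquery_results
export FLEET_KINESIS_STATUS_STREAM=osquery_status
fleet serve
```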
## AWS Lambda

Logs are written to AWS Lambda (Lambda).

- Plugin name: `lambda`
- Flag namespace: `lambda`
Lambda processes logs from Fleet synchronously, so the Lambda function must complete quickly enough that the osquery client does not time out while writing logs. If heavy processing is needed, use Lambda to store the logs in another datastore or queue before performing the long-running work.
Note that Lambda logging has limits discussed in the documentation. The maximum size of a log sent to Lambda is 6MB. When Fleet encounters logs that are too big for Lambda, notifications will be output in the Fleet logs and those logs will not be sent to Lambda.
Lambda is executed once per log line. As a result, queries with differential result logging might result in a higher number of Lambda invocations.
Queries are assigned `differential` result logging by default in Fleet. `differential` logs have two format options, single (event) and batched. Check out the osquery documentation for more information on `differential` logs.
Keep this in mind when using Lambda, as you're charged based on the number of requests for your functions and their duration (the time it takes for your code to execute).
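A sketch of a Lambda configuration, with placeholder function names (option names assume Fleet's usual `FLEET_<NAMESPACE>_<OPTION>` convention; verify against the configuration reference):

```shell
# Sketch: invoke Lambda functions per log line.
# Region and function names are placeholders.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=lambda
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=lambda
export FLEET_LAMBDA_REGION=us-east-1
export FLEET_LAMBDA_RESULT_FUNCTION=process_osquery_results
export FLEET_LAMBDA_STATUS_FUNCTION=process_osquery_status
fleet serve
```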
## Google Cloud Pub/Sub

Logs are written to Google Cloud Pub/Sub (Pub/Sub).

- Plugin name: `pubsub`
- Flag namespace: `pubsub`
Messages over 10MB will be dropped, with a notification sent to the Fleet logs, as these can never be processed by Pub/Sub.
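A sketch of a Pub/Sub configuration with placeholder project and topic names (verify the option names in Fleet's configuration reference; GCP credentials are typically supplied through the environment, e.g. application default credentials):

```shell
# Sketch: publish results and status logs to Pub/Sub topics.
# Project and topic names are placeholders.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=pubsub
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=pubsub
export FLEET_PUBSUB_PROJECT=my-gcp-project
export FLEET_PUBSUB_RESULT_TOPIC=osquery_results
export FLEET_PUBSUB_STATUS_TOPIC=osquery_status
fleet serve
```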
## Apache Kafka

Logs are written to Apache Kafka (Kafka) using the Kafka REST proxy.

- Plugin name: `kafkarest`
- Flag namespace: `kafka`
Note that the REST proxy must be in place in order to send osquery logs to Kafka topics.
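A sketch of a Kafka REST proxy configuration with a placeholder proxy host and topics (the option names here are assumptions based on Fleet's naming convention; verify them against the configuration reference):

```shell
# Sketch: write results and status logs to Kafka topics via the REST proxy.
# Proxy host and topic names are placeholders.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=kafkarest
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=kafkarest
export FLEET_KAFKAREST_PROXYHOST=https://kafka-rest-proxy.example.com:8082
export FLEET_KAFKAREST_RESULT_TOPIC=osquery_results
export FLEET_KAFKAREST_STATUS_TOPIC=osquery_status
fleet serve
```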
## Stdout

Logs are written to stdout.

- Plugin name: `stdout`
- Flag namespace: `stdout`
With the stdout plugin, logs are written to stdout on the Fleet server. This is typically used for debugging or with a log forwarding setup that will capture and forward stdout logs into a logging pipeline.
Note that if multiple load-balanced Fleet servers are used, the logs will be load-balanced across those servers (not duplicated).
## Filesystem

Logs are written to the local Fleet server filesystem. This is the default log destination.

- Plugin name: `filesystem`
- Flag namespace: `filesystem`
With the filesystem plugin, logs are written to the local filesystem on the Fleet server. This is typically used with a log forwarding agent on the Fleet server that will push the logs into a logging pipeline.
Note that if multiple load-balanced Fleet servers are used, the logs will be load-balanced across those servers (not duplicated).
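A sketch of a filesystem configuration with example log paths (verify the option names against Fleet's configuration reference; the Fleet server process needs write access to these paths):

```shell
# Sketch: write results and status logs to files on the Fleet server.
# Paths are placeholders; a log forwarding agent would typically tail them.
export FLEET_OSQUERY_RESULT_LOG_PLUGIN=filesystem
export FLEET_OSQUERY_STATUS_LOG_PLUGIN=filesystem
export FLEET_FILESYSTEM_RESULT_LOG_FILE=/var/log/osquery/osqueryd.results.log
export FLEET_FILESYSTEM_STATUS_LOG_FILE=/var/log/osquery/osqueryd.status.log
fleet serve
```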
## Sending logs outside of Fleet
Osquery agents are typically configured to send logs to the Fleet server (`--logger_plugin=tls`). This is not a requirement, and any other logger plugin can be used even when osquery clients connect to the Fleet server to retrieve configuration or run live queries.
See the osquery logging documentation for more about configuring logging on the agent.
If `--logger_plugin=tls` is used with osquery clients, the following configuration can be applied on the Fleet server for handling the incoming logs.
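To illustrate the agent side, a hedged sketch of osquery flags for the `tls` logger plugin (hostname, certificate, and secret paths are placeholders; the endpoint paths shown are the ones commonly used with Fleet, but confirm them in your Fleet version's docs):

```shell
# Sketch: osqueryd flags for sending logs through the Fleet server.
# Hostname and file paths are placeholders for your environment.
osqueryd \
  --tls_hostname=fleet.example.com \
  --tls_server_certs=/etc/osquery/fleet.pem \
  --enroll_secret_path=/etc/osquery/enroll_secret \
  --enroll_tls_endpoint=/api/v1/osquery/enroll \
  --config_plugin=tls \
  --config_tls_endpoint=/api/v1/osquery/config \
  --logger_plugin=tls \
  --logger_tls_endpoint=/api/v1/osquery/log \
  --logger_tls_period=10
```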