Deploy Loadtesting Infrastructure

Before we begin

Although deployments through the GitHub Action should be preferred, manual deployments require the following:

  • Terraform v1.10.2
  • Docker
  • Go
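
A quick way to verify that these prerequisites are available on your PATH before starting. This is a convenience sketch, not part of the repository; the tool list comes from the bullets above.

```shell
# Report any of the required tools that are missing from PATH.
check_tools() {
  status=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      status=1
    fi
  done
  return "$status"
}

if check_tools terraform docker go; then
  echo "all prerequisites found"
fi
```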

Additionally, refer to the Reference Architecture sizing recommendations for loadtest infrastructure sizing.

Deploy with GitHub Actions

Deploy/destroy an environment with the GitHub Action

  1. Navigate to the GitHub Action

  2. In the top right corner, select the Run Workflow dropdown.

  3. Fill out the details for the deployment.

  4. After filling out all details, click the green Run Workflow button directly under the inputs. For terraform_action, select Plan, Apply, or Destroy.

    • Plan will show you the results of a dry-run
    • Apply will deploy changes to the environment
    • Destroy will destroy your environment
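
The dropdown steps above can also be scripted from the command line with GitHub's gh CLI. This is a convenience sketch: the workflow file name (loadtest.yml) is an assumption and should be checked against the repository's workflow list; terraform_action is the input described above. Set GH=echo to preview the command instead of running it.

```shell
# Trigger the loadtest deployment workflow from the CLI.
# loadtest.yml is a hypothetical workflow file name -- verify it first.
GH=${GH:-gh}

run_loadtest_workflow() {
  # $1: plan | apply | destroy (the terraform_action input)
  "$GH" workflow run loadtest.yml -f terraform_action="$1"
}
```

For example, `run_loadtest_workflow plan` corresponds to selecting Plan in the web UI.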

Deploy environment manually

  1. Clone the repository

  2. Initialize Terraform

    terraform init
    
  3. Create a new Terraform workspace, or select an existing one, for your environment. The workspace name is used in several areas of the Terraform configuration to keep environments unique and to control access to them.

    terraform workspace new <workspace_name>
    

    or, if your workspace already exists

    terraform workspace list
    terraform workspace select <workspace_name>
    
  4. Ensure that your new or existing workspace is in use.

    terraform workspace show
    
  5. Deploy the environment (will also trigger migrations automatically)

    Note: Terraform will prompt you for confirmation before deploying. If everything looks correct, enter yes to proceed.

    terraform apply -var=tag=v4.72.0
    

    or, you can pass any of the additional supported Terraform variables to override their default values. Include only the ones you want to change; if a variable is not set, the default configured in ./variables.tf is used.

    Below is an example with all available variables.

    terraform apply \
      -var=tag=v4.72.0 \
      -var=fleet_task_count=20 \
      -var=fleet_task_memory=4096 \
      -var=fleet_task_cpu=512 \
      -var=database_instance_size=db.t4g.large \
      -var=database_instance_count=3 \
      -var=redis_instance_size=cache.t4g.small \
      -var=redis_instance_count=3 \
      -var=enable_otel=true
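
Steps 3-4 above can be collapsed into a small idempotent helper: select the workspace if it exists, otherwise create it, then confirm which workspace is active. This is a convenience sketch; TF defaults to terraform, and setting TF=echo previews the commands instead of running them.

```shell
# Select-or-create a Terraform workspace, then show the active one.
TF=${TF:-terraform}

tf_workspace() {
  "$TF" workspace select "$1" 2>/dev/null || "$TF" workspace new "$1"
  "$TF" workspace show
}
```

Usage: `tf_workspace <workspace_name>` before running terraform apply.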
    

OpenTelemetry tracing with SigNoz

By default, the loadtest environment uses Elastic APM. You can optionally use OpenTelemetry with SigNoz instead by setting enable_otel=true:

terraform apply -var=tag=v4.72.0 -var=enable_otel=true

This deploys both Fleet and SigNoz in a single command. See ../signoz/README.md for architecture details.

Accessing the SigNoz UI

After deploying with enable_otel=true, get the SigNoz UI URL:

$(terraform output -raw signoz_configure_kubectl) && kubectl get svc signoz -n signoz -o jsonpath='http://{.status.loadBalancer.ingress[0].hostname}:8080'

Destroy environment manually

  1. Clone the repository (if not already cloned)

  2. Initialize Terraform

    terraform init
    
  3. Select your workspace

    terraform workspace list
    terraform workspace select <workspace_name>
    
  4. Destroy the environment

    terraform destroy
    

Delete the workspace

Once all resources have been removed from the workspace, delete the Terraform workspace.

terraform workspace delete <workspace_name>
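
The destroy-and-delete sequence can be combined into one helper. Note that Terraform refuses to delete the workspace that is currently selected, so switch back to default before deleting. This is a convenience sketch; destroy will still prompt for confirmation, and setting TF=echo previews the commands instead of running them.

```shell
# Destroy the environment, then delete its workspace.
TF=${TF:-terraform}

tf_teardown() {
  "$TF" destroy &&
    "$TF" workspace select default &&
    "$TF" workspace delete "$1"
}
```

Usage: `tf_teardown <workspace_name>` from the repository's terraform directory.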

Requirements

| Name | Version |
|------|---------|
| aws | >= 5.68.0 |
| docker | ~> 2.16.0 |
| git | 2025.10.10 |

Providers

| Name | Version |
|------|---------|
| aws | 6.21.0 |
| docker | 2.16.0 |
| git | 2025.10.10 |
| random | 3.7.2 |
| terraform | n/a |
| tls | 4.1.0 |

Modules

| Name | Source | Version |
|------|--------|---------|
| acm | terraform-aws-modules/acm/aws | 4.3.1 |
| loadtest | github.com/fleetdm/fleet-terraform//byo-vpc | tf-mod-root-v1.18.3 |
| logging_alb | github.com/fleetdm/fleet-terraform//addons/logging-alb | tf-mod-addon-logging-alb-v1.6.2 |
| mdm | github.com/fleetdm/fleet-terraform/addons/mdm?depth=1&ref=tf-mod-addon-mdm-v2.0.0 | n/a |
| migrations | github.com/fleetdm/fleet-terraform//addons/migrations | tf-mod-addon-migrations-v2.2.1 |
| osquery-carve | github.com/fleetdm/fleet-terraform//addons/osquery-carve | tf-mod-addon-osquery-carve-v1.1.1 |
| ses | github.com/fleetdm/fleet-terraform//addons/ses | tf-mod-addon-ses-v1.4.0 |
| vuln-processing | github.com/fleetdm/fleet-terraform//addons/external-vuln-scans | tf-mod-addon-external-vuln-scans-v2.3.0 |

Resources

| Name | Type |
|------|------|
| aws_ecr_repository.fleet | resource |
| aws_iam_policy.enroll | resource |
| aws_iam_policy.license | resource |
| aws_iam_role_policy_attachment.enroll | resource |
| aws_kms_alias.alias | resource |
| aws_kms_key.customer_data_key | resource |
| aws_kms_key.main | resource |
| aws_lb.internal | resource |
| aws_lb_listener.internal | resource |
| aws_lb_target_group.internal | resource |
| aws_route53_record.main | resource |
| aws_secretsmanager_secret_version.scep | resource |
| aws_security_group.internal | resource |
| docker_registry_image.fleet | resource |
| random_password.challenge | resource |
| random_pet.db_secret_postfix | resource |
| tls_private_key.cloudfront_key | resource |
| tls_private_key.scep_key | resource |
| tls_self_signed_cert.scep_cert | resource |
| aws_acm_certificate.certificate | data source |
| aws_caller_identity.current | data source |
| aws_ecr_authorization_token.token | data source |
| aws_iam_policy_document.enroll | data source |
| aws_iam_policy_document.license | data source |
| aws_region.current | data source |
| aws_route53_zone.main | data source |
| aws_secretsmanager_secret.license | data source |
| aws_secretsmanager_secret_version.enroll_secret | data source |
| docker_registry_image.dockerhub | data source |
| git_repository.tf | data source |
| terraform_remote_state.shared | data source |
| terraform_remote_state.signoz | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| database_instance_count | The number of Aurora database instances | number | 2 | no |
| database_instance_size | The instance size for Aurora database instances | string | "db.t4g.medium" | no |
| enable_otel | Enable OpenTelemetry tracing with SigNoz instead of Elastic APM | bool | false | no |
| fleet_task_count | The total number (max) that ECS can scale Fleet containers up to | number | 5 | no |
| fleet_task_cpu | The CPU configuration for Fleet containers | number | 512 | no |
| fleet_task_memory | The memory configuration for Fleet containers | number | 4096 | no |
| redis_instance_count | The number of Elasticache nodes | number | 3 | no |
| redis_instance_size | The instance size for Elasticache nodes | string | "cache.t4g.micro" | no |
| tag | The tag to deploy. This would be the same as the branch name | string | "v4.76.1" | no |
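
For repeated runs, these inputs can also be kept in a terraform.tfvars file instead of passing -var flags on every apply. The fragment below is for illustration only; the values mirror the defaults listed above.

```hcl
# Example terraform.tfvars -- values are the documented defaults; adjust per run.
tag                     = "v4.76.1"
fleet_task_count        = 5
fleet_task_cpu          = 512
fleet_task_memory       = 4096
database_instance_size  = "db.t4g.medium"
database_instance_count = 2
redis_instance_size     = "cache.t4g.micro"
redis_instance_count    = 3
enable_otel             = false
```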

Outputs

| Name | Description |
|------|-------------|
| ecs_arn | n/a |
| ecs_cluster | n/a |
| ecs_execution_arn | n/a |
| enroll_secret_arn | n/a |
| internal_alb_dns_name | n/a |
| kms_key_id | n/a |
| logging_config | n/a |
| security_groups | n/a |
| server_url | n/a |
| signoz_cluster_name | SigNoz EKS cluster name |
| signoz_configure_kubectl | Command to configure kubectl for SigNoz |
| signoz_otel_collector_endpoint | Internal OTLP collector endpoint for Fleet |
| vpc_subnets | VPC private subnets from shared fleet-vpc |