Merge branch 'main' into 24755-default-fleet-for-windows

Resolve merge conflicts in 5 files by keeping both sides:
- generate_gitops.go: keep GetWindowsMDMDefaultTeam + GetVPPTokens
- generate_gitops_test.go: keep both mock methods
- ee/server/service/mdm.go: keep WindowsMDMDefaultTeam + ClearPasscode methods
- microsoft_mdm_test.go: keep all test case entries and functions
- schema.sql: regenerated via make test-schema

Bump migration 20260316120000 -> 20260420120000 to resolve timestamp
collision with main's DropKernelHostCountsForeignKey migration.
Josh Roskos 2026-04-20 08:31:56 -05:00
commit ce3f58127a
3023 changed files with 320336 additions and 119894 deletions

# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## About Fleet
Fleet is an open-source platform for IT and security teams: device management (MDM), vulnerability reporting, osquery fleet management, and security monitoring. Go backend, React/TypeScript frontend, manages thousands of devices across macOS, Windows, Linux, iOS, iPadOS, Android, and ChromeOS.
## Architecture
### Backend request flow
HTTP request → `server/service/handler.go` routes → endpoint function (decode request) → service method (auth + business logic) → datastore method (SQL) → response struct
### Key layers
- **Types & interfaces**: `server/fleet/` — `Service` in `service.go`, `Datastore` in `datastore.go`
- **Service implementations**: `server/service/` — business logic, auth checks
- **Datastore (MySQL)**: `server/datastore/mysql/` — SQL queries, migrations
- **Enterprise features**: `ee/server/service/` — wraps core service with license checks
- **MDM**: `server/mdm/` — Apple, Microsoft, Android device management
- **Frontend**: `frontend/pages/` (routes), `frontend/components/` (reusable UI), `frontend/services/` (API client)
- **CLI tools**: `cmd/fleet/` (server), `cmd/fleetctl/` (management CLI), `orbit/` (agent)
### Enterprise vs core
- Core features: no special build tags, available in all deployments
- Enterprise features: in `ee/` directory, license checks at service layer
- Use `//go:build !premium` for core-only features when needed
## Terminology
The following terms were recently renamed. Use the new terms in conversation and new code, but don't rename existing variables or API parameters without guidance:
- **"Teams" → "Fleets"** — the concept of grouping hosts. Legacy code still uses `team_id`, `teams` table, etc.
- **"Queries" → "Reports"** — what was formerly a "query" in the product is now a "report." The word "query" now refers solely to a SQL query, which is one aspect of a report.
## Fleet-specific patterns
### Go backend
- **Error wrapping**: `ctxerr.Wrap(ctx, err, "description")` — never pkg/errors
- **Request/Response**: lowercase struct types, `Err error` field, `Error()` method returning `r.Err`
- **Endpoint registration**: `ue.POST("/api/_version_/fleet/resource", fn, reqType{})`
- **Authorization**: `svc.authz.Authorize(ctx, entity, fleet.ActionX)` at start of service methods
- **Logging**: slog with `DebugContext/InfoContext/WarnContext/ErrorContext` — never bare slog.Debug/Info/Warn/Error
- **Pointers**: Use Go 1.26 `new(expression)` for pointer values (e.g., `new("value")`, `new(true)`, `new(42)`). Do NOT use the legacy `server/ptr` package in new code — it exists throughout the codebase but is superseded by `new(expr)`.
- **Reference example**: `server/service/vulnerabilities.go`
## Before writing a fix
- Identify WHERE in the request lifecycle the problem manifests (creation vs team-addition vs sync vs query). Fix it there, not at the reproduction step.
- Read the surrounding 100 lines. If similar checks exist nearby, follow their pattern exactly.
- If an endpoint has zero DB interaction, that's intentional. Adding DB calls needs justification.
- Cover ALL entry points for the same operation (single add, batch/GitOps, etc.).
- For declarative/batch endpoints, validate within the incoming payload, not against the DB.
- When checking for duplicates, exclude the current entity to avoid false conflicts on upserts.
- Run `go test ./server/service/` after adding new datastore interface methods — uninitialized mocks crash other tests.
## Development commands
Check the `Makefile` for the full list of available targets. Key ones below.
### Building and running
```bash
make build # Build fleet + fleetctl
make serve # Start dev server (or: make up)
make generate-dev # Webpack watch mode for frontend dev
make deps # Install dependencies
```
### Testing
```bash
go test ./server/fleet/... # Quick (no external deps)
MYSQL_TEST=1 go test ./server/datastore/mysql/... # MySQL integration
MYSQL_TEST=1 REDIS_TEST=1 go test ./server/service/... # Service integration
MYSQL_TEST=1 go test -run TestFunctionName ./server/datastore/mysql/... # Specific test
yarn test # Frontend Jest tests
```
### Linting
```bash
make lint-go-incremental # Go — ONLY changes since branching from main (use after editing)
make lint-go # Go — full (use before committing)
make lint-js # JS/TS linters
```
```bash
# Generate boilerplate for a new frontend component (stylesheet, tests, storybook)
./frontend/components/generate -n RequiredPascalCaseNameOfTheComponent -p optional/path/to/desired/parent/directory
```
### Database
```bash
make migration name=CamelCaseName # Create new migration
make db-reset # Reset dev database
```
### CI test bundles
| Bundle | Packages | Env vars |
|--------|----------|----------|
| `fast` | No external deps | none |
| `mysql` | `server/datastore/mysql/...` | `MYSQL_TEST=1` |
| `service` | `server/service/` (unit) | `MYSQL_TEST=1 REDIS_TEST=1` |
| `integration-core` | `server/service/integration_*_test.go` | `MYSQL_TEST=1 REDIS_TEST=1` |
| `integration-enterprise` | `ee/server/service/integration_*_test.go` | `MYSQL_TEST=1 REDIS_TEST=1` |
| `integration-mdm` | MDM integration tests | `MYSQL_TEST=1 REDIS_TEST=1` |
| `fleetctl` | `cmd/fleetctl/...` | varies |
| `vuln` | `server/vulnerabilities/...` | varies |
| `main` | Everything else | varies |
## Skills and agents
Type `/` to see available skills. Key ones: `/test`, `/lint`, `/review-pr`, `/fix-ci`, `/spec-story`, `/new-endpoint`, `/new-migration`, `/bump-migration`, `/project`, `/fleet-gitops`, `/find-related-tests`.
Agents: **go-reviewer** (proactive after Go edits), **frontend-reviewer** (proactive after TS edits), **fleet-security-auditor** (on-demand for auth/MDM/security).
## Documentation
All Fleet documentation lives in this repo. Check these sources before searching the web:
- **`docs/`** — User-facing docs: feature guides, REST API reference, configuration, deployment, contributing
- **`handbook/`** — Internal procedures: engineering practices, company policies, product design
- **`articles/`** — Blog posts and tutorials
## Other references
- Linter config: `.golangci.yml`
- Activity types: `docs/Contributing/reference/audit-logs.md`
- Claude Code setup: `.claude/README.md`

---

`.claude/README.md` (new file):
# Fleet Claude Code configuration
This directory contains team-shared [Claude Code](https://claude.ai/code) configuration for the Fleet project. Everything here works out of the box with no MCP servers, plugins, or external dependencies required. The full setup adds ~2,500 tokens at startup — rules, skill bodies, and agent bodies only load on demand.
This setup is a starting point. You can customize it by creating `.claude/settings.local.json` (gitignored) to add your own permissions, MCP servers, and plugins. See [Customize your setup](#customize-your-setup) for details.
If you're new to Claude Code, start with the [primer](#claude-code-primer) below. If you already know Claude Code, skip to [what's here](#whats-here).
### Try it on your branch
To test this setup without switching branches, pull the `.claude/` folder into your current working branch:
```bash
# Add the configuration to your branch
git checkout origin/cc-setup-teamwide -- .claude/
# Start a Claude Code session and work normally (use --debug to see hooks firing)
claude --debug
# When you're done testing, fully remove it so nothing ends up in your PR
git checkout -- .claude/
git clean -fd .claude/
```
This drops the full setup (rules, skills, agents, hooks, and permissions) into your working tree. Start a new Claude Code session and everything loads automatically. When you're done, the second command reverts `.claude/` to whatever's on your branch.
To troubleshoot hooks or see exactly what's firing, start with `claude --debug`. Check the debug log at `~/.claude/debug/` for detailed hook and tool execution traces.
### Not covered by this configuration
The following areas have their own conventions and aren't covered by the current rules, hooks, or skills:
- **`website/`** — Fleet marketing website (Sails.js, separate `package.json` and conventions)
- **`ee/fleetd-chrome/`** — Chrome extension for ChromeOS (TypeScript, separate test setup)
- **`ee/vulnerability-dashboard/`** — Vulnerability dashboard (Sails.js/Grunt, legacy patterns)
- **`android/`** — Android app (Kotlin/Gradle, separate build system)
- **`third_party/`** — Forked external code (not Fleet's conventions)
- **Documentation** — Guides, API docs, and handbook documentation workflows
- **Fleet-maintained apps (FMA)** — FMA catalog workflows, maintained-app packaging, and `ee/maintained-apps/` conventions
- **MDM-specific patterns** — `server/mdm/` has complex multi-platform patterns (Apple, Windows, Android) beyond what the Go backend rule covers
---
## Claude Code primer
Claude Code is an AI coding assistant that runs in your terminal, VS Code, JetBrains, desktop app, or browser. It reads your codebase, writes code, runs commands, and understands project context through configuration files like the ones in this directory.
### Core concepts
**CLAUDE.md** — Project instructions loaded at session start, like a `.editorconfig` for AI. Claude reads these automatically to understand your project's conventions, architecture, and workflows. There can be multiple: root-level, `.claude/CLAUDE.md`, and user-level `~/.claude/CLAUDE.md`.
**Skills** — Reusable workflows invoked with `/` (e.g., `/test`, `/fix-ci`). Each skill is a `SKILL.md` file with YAML frontmatter that controls when it triggers, which tools it can use, and whether it runs in an isolated context. Skills replace the older `.claude/commands/` format, adding auto-invocation, tool restrictions, and isolated execution.
**Agents (subagents)** — Specialized AI assistants that run in isolated contexts with their own tools and model. Claude can delegate to them automatically (if their description includes "PROACTIVELY") or you can invoke them by name.
**Rules** — Coding conventions that auto-apply based on file paths. When you edit a `.go` file, Go rules load automatically. When you edit `.tsx`, frontend rules load.
**Hooks** — Shell scripts that run automatically on events like editing files (`PostToolUse`) or before running a tool (`PreToolUse`). Our hooks auto-format Go and TypeScript files on every edit.
**MCP servers** — External tool integrations via the Model Context Protocol. Connect Claude to GitHub, databases, documentation search, and other services. These aren't required for the team setup but can enhance your personal workflow.
**Plugins** — Bundled packages of skills, agents, hooks, and MCP configs from the Claude Code marketplace. Like MCP servers, these are optional personal enhancements.
**Memory** — Claude maintains auto-generated memory across sessions at `~/.claude/projects/<project>/memory/`. It remembers patterns, preferences, and lessons learned. View with `/memory`.
### Commands, shortcuts, and session management
**Sessions**
| Action | How |
|--------|-----|
| Start a session | `claude` (terminal) or open in IDE |
| Continue last session | `claude -c` or `/resume` |
| Resume a named session | `claude -r "name"` or `/resume` |
| Rename session | `/rename <name>` |
| Branch conversation | `/branch` (explore alternatives in parallel) |
| Rewind to checkpoint | `Esc` twice, or `/rewind` |
| Export session | `/export` |
| Side question | `/btw <question>` (doesn't affect conversation history) |
**Context** — The context window fills over time. Manage it actively:
| Action | How |
|--------|-----|
| Check context usage | `/context` |
| Compress conversation | `/compact` or `/compact <focus>` (e.g., `/compact keep the migration plan, drop debugging`) |
| Clear and start fresh | `/clear` |
Use `/clear` between unrelated tasks — context pollution degrades quality. Use `/compact` when context gets large. Delegate heavy investigation to subagents to keep the main context clean. Press `Esc` twice to rewind if Claude goes off track.
**Configuration and diagnostics**
| Action | How |
|--------|-----|
| Invoke a skill | Type `/` then select from menu |
| Switch model | `/model` (sonnet/opus/haiku) |
| Set effort level | `/effort` (low/medium/high) |
| Toggle extended thinking | `Option+T` (macOS) / `Alt+T` |
| Cycle permission mode | `Shift+Tab` |
| Enter plan mode | `/plan <description>` or `Shift+Tab` |
| Edit plan externally | `Ctrl+G` |
| Manage permissions | `/permissions` or `/allowed-tools` |
| Open settings | `/config` |
| View diff of changes | `/diff` |
| Check session cost | `/cost` |
| Check version and status | `/status` |
| Run installation health check | `/doctor` |
| List all commands | `/help` |
### Advanced features
**Plan mode** — Separates research from implementation. Claude explores the codebase and writes a plan for your review before making changes. Activate with `Shift+Tab`, `/plan`, or `--permission-mode plan`. Edit the plan externally with `Ctrl+G`.
**Extended thinking** — Gives Claude more reasoning time for complex problems. Toggle with `Option+T` (macOS) / `Alt+T`. Set effort level with `/effort`. Include "ultrathink" in prompts for maximum depth.
**Auto mode** — Uses a background safety classifier to auto-approve safe tool calls without prompting. Cycle to it with `Shift+Tab`. Configure trusted domains and environments in `settings.json` under `autoMode`.
**Permission modes** — A spectrum from restrictive to autonomous:
- `default` — Reads freely, prompts for writes and commands
- `acceptEdits` — Auto-approves file edits, prompts for commands
- `plan` — Read-only exploration
- `auto` — Classifier-based decisions
- `dontAsk` — Auto-denies tools unless pre-approved via `/permissions` or settings
- `bypassPermissions` — No checks (CI/CD use only)
**Headless and CI mode** — Run non-interactively with `claude -p "prompt" --output-format json`. Useful for CI pipelines, batch processing, and scripted workflows.
**Background tasks** — Long-running work continues while you chat. Skills with `context: fork` run in isolated subagents.
**Git worktrees** — Run `claude --worktree` to work in an isolated git worktree so experimental changes don't affect your working directory.
### Settings hierarchy
Settings are applied in this order (highest to lowest priority):
1. **Managed** — Organization-wide policies (IT/admin controlled)
2. **Local** — `.claude/settings.local.json` (personal, gitignored)
3. **Project** — `.claude/settings.json` (team-shared, checked in)
4. **User** — `~/.claude/settings.json` (personal, all projects)
Your local settings override project settings, so you can always customize without affecting the team.
---
## What's here
```
.claude/
├── CLAUDE.md # Project instructions (architecture, patterns, commands)
├── settings.json # Team settings (env vars, permissions, hooks)
├── settings.local.json # Personal overrides (gitignored)
├── README.md # This file
├── rules/ # Path-scoped coding conventions (auto-applied)
│ ├── fleet-go-backend.md # Go: ctxerr, service patterns, logging, testing
│ ├── fleet-frontend.md # React/TS: components, React Query, BEM, interfaces
│ ├── fleet-database.md # MySQL: migrations, goqu, reader/writer
│ ├── fleet-api.md # API: endpoint registration, versioning, error responses
│ └── fleet-orbit.md # Orbit: agent packaging, TUF updates, platform-specific code
├── skills/ # Workflow skills (invoke with /)
│ ├── review-pr/ # /review-pr <PR#>
│ ├── fix-ci/ # /fix-ci <run-url>
│ ├── test/ # /test [filter]
│ ├── find-related-tests/ # /find-related-tests
│ ├── lint/ # /lint [go|frontend]
│ ├── fleet-gitops/ # /fleet-gitops
│ ├── project/ # /project <name>
│ ├── new-endpoint/ # /new-endpoint
│ ├── new-migration/ # /new-migration
│ ├── bump-migration/ # /bump-migration <filename>
│ ├── spec-story/ # /spec-story <issue#>
│ └── cherry-pick/ # /cherry-pick <PR#> [RC_BRANCH]
├── agents/ # Specialized AI agents
│ ├── go-reviewer.md # Go reviewer (proactive, sonnet)
│ ├── frontend-reviewer.md # Frontend reviewer (proactive, sonnet)
│ └── fleet-security-auditor.md # Security auditor (on-demand, opus)
└── hooks/ # Automated hooks
├── guard-dangerous-commands.sh # PreToolUse: blocks dangerous commands
├── goimports.sh # PostToolUse: formats Go files
├── prettier-frontend.sh # PostToolUse: formats frontend files
└── lint-on-save.sh # PostToolUse: lints Go/TS and feeds violations back to Claude
```
## Skills reference
Several skills use the `gh` CLI for GitHub operations (PR review, CI diagnosis, issue speccing). Make sure you have [`gh`](https://cli.github.com/) installed and authenticated with `gh auth login`.
| Skill | Usage | What it does |
|-------|-------|-------------|
| `/review-pr` | `/review-pr 12345` | Reviews a PR for correctness, Go idioms, SQL safety, test coverage, and Fleet conventions. Runs in isolated context. Requires `gh`. |
| `/fix-ci` | `/fix-ci https://github.com/.../runs/123` | Diagnoses CI failures in 8 steps: identifies failing suites, fetches logs, classifies failures as stale assertions vs real bugs, fixes stale assertions, and reports real bugs. Requires `gh`. |
| `/test` | `/test` or `/test TestFoo` | Detects which packages changed via `git diff` and runs their tests with the correct env vars (`MYSQL_TEST`, `REDIS_TEST`). |
| `/find-related-tests` | `/find-related-tests` | Maps changed files to their `_test.go` files, integration tests, and test helpers. Outputs exact `go test` commands. |
| `/fleet-gitops` | `/fleet-gitops` | Validates GitOps YAML: osquery queries against Fleet schema, Apple/Windows/Android profiles against upstream references, and software against the Fleet-maintained app catalog. |
| `/project` | `/project android-mdm` | Loads or creates a workstream context file in your Claude memory directory. Includes a minimal self-improvement mechanism — Claude adds discoveries, gotchas, and key file paths as you work, so each session starts with slightly richer context than the last. |
| `/new-endpoint` | `/new-endpoint` | Scaffolds a Fleet API endpoint: request/response structs, endpoint function, service method, datastore interface, handler registration, and test stubs. |
| `/new-migration` | `/new-migration` | Creates a timestamped migration file and test file with proper naming, init registration, and Up function (Down is always a no-op). |
| `/bump-migration` | `/bump-migration YYYYMMDDHHMMSS_Name.go` | Bumps a migration's timestamp to current time when it conflicts with a migration already merged to main. Renames files and updates function names in both migration and test files. |
| `/spec-story` | `/spec-story 12345` | Breaks down a GitHub story into implementable sub-issues: maps codebase impact, decomposes into atomic tasks per layer (migration/datastore/service/API/frontend), and writes specs with acceptance criteria and a dependency graph. Requires `gh`. |
| `/lint` | `/lint` or `/lint go` | Runs the appropriate linters (golangci-lint, eslint, prettier) on recently changed files. Accepts `go`, `frontend`, or a file path to narrow scope. |
| `/cherry-pick` | `/cherry-pick 43082` or `/cherry-pick 43082 rc-minor-fleet-v4.83.0` | Cherry-picks a merged PR into an RC branch. Auto-detects the latest `rc-minor-fleet-v*` or `rc-patch-fleet-v*` branch, or accepts an explicit target. Handles squash-merged and merge commits. Requires `gh`. |
### Using `/project` for workstream context
The `/project` skill builds a personal knowledge base for areas of the codebase you work in repeatedly. Use it at the start of a session to load context from previous sessions.
**First use:** `/project software` — no file exists yet, so Claude asks you to describe the workstream, explores the codebase, and creates a context file with key files, patterns, and architecture notes.
**Subsequent sessions:** `/project software` — Claude loads what it knows, summarizes it, and asks what you're working on today.
**As you work:** Claude adds useful discoveries to the project file — gotchas, important file paths, architectural decisions — so the next session starts with richer context.
**Organizing projects:** The name is just a label. Pick the scope that's most useful to you:
| Scope | Example | Good for |
|-------|---------|----------|
| By team area | `/project software`, `/project mdm` | Broad context that accumulates over time. Good if you consistently work in one area. |
| By feature | `/project patch-policies`, `/project android-enrollment` | Focused context for multi-week features. Tracks specific decisions, status, and key files. |
| By issue | `/project 35666-gitops-exceptions` | Narrow, disposable context tied to a specific piece of work. |
Project files are stored per-machine in your Claude memory directory (`~/.claude/projects/`). They're personal — not shared with the team. Context grows gradually (a few lines per session) and Claude auto-truncates at 200 lines / 25KB, so it won't run away.
## Agents reference
### go-reviewer (sonnet, proactive)
Runs automatically after Go file changes. Checks:
- Error handling (ctxerr wrapping, no swallowed errors)
- Database patterns (parameterized queries, reader/writer, and index coverage)
- API conventions (auth checks, response types, and HTTP status codes)
- Test coverage (integration tests for DB code, edge cases)
- Logging (structured slog, no print statements)
### frontend-reviewer (sonnet, proactive)
Runs automatically after TypeScript and React file changes. Checks:
- TypeScript strictness (no `any`, proper type narrowing)
- React Query patterns (query keys, `enabled` option)
- Component structure (4-file pattern, BEM naming)
- Interface consistency (`I` prefix, `frontend/interfaces/` types)
- Accessibility (ARIA attributes, keyboard navigation)
### fleet-security-auditor (opus, on-demand)
Invoke when touching auth, MDM, enrollment, or user data. Uses Opus for deeper adversarial reasoning. Checks:
- API authorization gaps (missing `svc.authz.Authorize` calls)
- MDM profile payload injection
- osquery query injection
- Team permission boundary violations
- Certificate and SCEP handling
- PII in logs, license enforcement bypass
You can add your own agents by creating files in `.claude/agents/` on a branch, or in `~/.claude/agents/` for personal agents that apply across all projects.
## Hooks
Four hooks run automatically:
| Hook | Event | Files | What it does |
|------|-------|-------|-------------|
| `guard-dangerous-commands.sh` | PreToolUse (Bash) | All commands | Blocks `rm -rf /`, force push to main/master, `git reset --hard origin/`, and pipe-to-shell attacks |
| `goimports.sh` | PostToolUse (Edit/Write) | `**/*.go` | Formats with `goimports` → `gofumpt` → `gofmt` (first available) |
| `prettier-frontend.sh` | PostToolUse (Edit/Write) | `frontend/**` | Formats with `npx prettier --write` |
| `lint-on-save.sh` | PostToolUse (Edit/Write) | `**/*.go`, `**/*.ts`, `**/*.tsx` | Auto-fixes with `golangci-lint --fix`, then runs `make lint-go-incremental` (only changes since branching from main) and feeds remaining violations back to Claude for self-correction. For TypeScript, runs `eslint --fix` then reports remaining issues. |
Hooks run in order: formatters first (goimports, prettier), then the linter. The linter is non-blocking — it doesn't reject the edit, but Claude sees the output and fixes violations in its next step. All hooks exit gracefully if the tool isn't installed. To add project-level hooks, edit `.claude/settings.json` on a branch. For personal hooks, add them to `~/.claude/settings.json`.
## Rules
Rules auto-apply when you edit files matching their path globs:
| Rule | Paths | Key conventions |
|------|-------|----------------|
| `fleet-go-backend.md` | `server/**/*.go`, `cmd/**/*.go`, `orbit/**/*.go`, `ee/**/*.go`, `pkg/**/*.go`, `tools/**/*.go`, `client/**/*.go`, `test/**/*.go` | ctxerr errors, error types, banned imports, input validation, viewer context, auth pattern, `fleethttp.NewClient()`, `new(expression)` pointers, bounded contexts, and service signatures |
| `fleet-frontend.md` | `frontend/**/*.ts`, `frontend/**/*.tsx` | React Query, component structure, BEM/SCSS, permissions utilities, team context (fleets/reports terminology), notifications, XSS prevention, and string/URL utilities |
| `fleet-database.md` | `server/datastore/**/*.go` | Migration naming and testing, goqu queries, reader/writer, transaction rules (no ds.reader/writer inside tx), parameterized SQL, and batch operations |
| `fleet-api.md` | `server/service/**/*.go` | Endpoint registration, API versioning, and error-in-response pattern |
| `fleet-orbit.md` | `orbit/**/*.go` | Agent architecture, TUF updates, platform-specific code, packaging, keystore, and security considerations |
## Permissions
`settings.json` pre-approves safe operations so you don't get prompted:
**Allowed:** `go test`, `go vet`, `go build`, `golangci-lint`, `yarn test/lint`, `npx prettier/eslint/tsc/jest`, `make test/lint/build/generate/serve/db-*/migration/deps/e2e-*`, `git status/diff/log/show/branch`, and `gh pr/issue/run/api`
**Denied:** `git push --force`, `git push -f`, `rm -rf /`, and `rm -rf ~`
Commands not in either list (like `git commit` or `git push`) will prompt for permission on first use. To pre-approve them, add them to your `.claude/settings.local.json` — see [local settings](#local-settings) below.
## Customize your setup
Everything above works without extra configuration. The sections below describe how to customize your personal experience without affecting the team.
### Model and effort
Change the model or effort level for your current session at any time:
```
/model opus # Switch to Opus for deeper reasoning
/model sonnet # Switch to Sonnet for faster responses
/effort high # More reasoning time
/effort low # Faster, lighter responses
```
Each skill in this setup has an `effort` level tuned for its complexity (e.g., `/spec-story` uses high, `/test` uses low). The skill's effort overrides your session setting while the skill is active, then reverts when it finishes.
To set your default for all sessions, add to `~/.claude/settings.json`:
```json
{
"model": "opus[1m]",
"effortLevel": "high"
}
```
### Override a shared skill
Each skill has `effort` and optionally `model` set in its frontmatter. You can't override a specific skill's frontmatter from settings — but you can override the entire skill by creating a personal copy with the same name at a higher-priority location.
Personal skills (`~/.claude/skills/`) take precedence over project skills (`.claude/skills/`). To override `/test` with a different effort level:
```bash
# Copy the shared skill to your personal config
mkdir -p ~/.claude/skills/test
cp .claude/skills/test/SKILL.md ~/.claude/skills/test/SKILL.md
# Edit the frontmatter to change effort, model, or anything else
```
Your personal version takes priority. The shared version is ignored for you but still works for everyone else.
### Override a shared agent
Same pattern as skills. Personal agents (`~/.claude/agents/`) take precedence over project agents (`.claude/agents/`):
```bash
# Override go-reviewer with your own version
cp .claude/agents/go-reviewer.md ~/.claude/agents/go-reviewer.md
# Edit to change model, tools, or review criteria
```
### Local settings
Create `.claude/settings.local.json` (gitignored) for personal permission overrides. Local settings take priority over project settings in `.claude/settings.json`.
Common things to add:
- Git write permissions (the shared setup only allows read operations)
- MCP server tool permissions
- Additional `make` or `bash` commands specific to your workflow
- Additional hooks
```json
{
"permissions": {
"allow": [
"Bash(git add*)",
"Bash(git commit*)",
"Bash(git push)",
"mcp__github__*",
"mcp__my-mcp-server__*"
]
},
"hooks": {
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "my-personal-hook.sh",
"timeout": 10
}
]
}
]
}
}
```
Local hooks run in addition to shared hooks, not instead of them. Permission rules merge across levels, with deny taking precedence: if the shared settings deny something, local settings can't override it.
### Personal CLAUDE.md
Create a root-level `CLAUDE.md` (gitignored) for personal instructions that apply on top of the shared `.claude/CLAUDE.md`. Use this for preferences like MCP tool mandates, git workflow rules, or personal conventions. Both files load at session start.
### Personal rules
Create rules at `~/.claude/rules/` for conventions that apply across all your projects. Project rules in `.claude/rules/` and personal rules in `~/.claude/rules/` both load — they don't override each other.
### MCP servers
The shared setup doesn't require any MCP servers. Skills use the `gh` CLI for GitHub operations, which works without MCP. However, MCP servers can enhance your workflow:
```bash
# GitHub MCP — richer GitHub integration beyond what gh CLI provides
claude mcp add --transport http github https://api.github.com/mcp
# Semantic code search — understand code structure, not just text patterns
claude mcp add --transport stdio serena -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context=claude-code --project-from-cwd
# Documentation search — look up third-party library docs
claude mcp add --transport stdio context7 -- npx -y @upstash/context7-mcp@latest
```
After adding an MCP server, grant its tools in your local settings:
```json
{
"permissions": {
"allow": ["mcp__github__*", "mcp__serena__*", "mcp__context7__*"]
}
}
```
### Plugins
Plugins bundle skills, agents, hooks, and MCP configs. Browse and install from the marketplace:
```bash
claude plugins list # Browse available plugins
claude plugins install <name> # Install a plugin
claude plugins remove <name> # Remove a plugin
```
Useful plugins for Fleet development: `gopls-lsp` (Go LSP), `typescript-lsp` (TS LSP), `feature-dev` (code explorer, architect, and reviewer agents), and `security-guidance` (security warnings on sensitive patterns).
### Override precedence summary
| What | Personal location | Behavior |
|------|------------------|----------|
| Skills | `~/.claude/skills/<name>/SKILL.md` | Replaces the project skill with the same name |
| Agents | `~/.claude/agents/<name>.md` | Replaces the project agent with the same name |
| Rules | `~/.claude/rules/<name>.md` | Additive — loads alongside project rules |
| Settings | `.claude/settings.local.json` | Merges with project settings; deny rules can't be overridden |
| Hooks | `.claude/settings.local.json` | Additive — runs alongside project hooks |
| CLAUDE.md | Root `CLAUDE.md` (gitignored) | Additive — loads alongside `.claude/CLAUDE.md` |
| Memory | `~/.claude/projects/*/memory/` | Personal only — not shared |
## Contribute to this configuration
1. Create a branch.
2. Edit files in `.claude/`.
3. Start a new Claude Code session to test. Use `/context` to verify your changes load correctly.
4. Open a PR for review.
### Add a skill
Create `.claude/skills/your-skill/SKILL.md`:
```yaml
---
name: your-skill
description: When to trigger. Use when asked to "do X" or "Y".
allowed-tools: Read, Grep, Glob, Bash(specific command*)
disable-model-invocation: true # Optional: user-only, no auto-trigger
context: fork # Optional: run in isolated subagent
---
Instructions for Claude when this skill is invoked.
Use $ARGUMENTS for user input.
```
### Add a rule
Create `.claude/rules/your-rule.md`:
```yaml
---
paths:
- "path/**/*.ext"
---
# Rule title
- Convention 1
- Convention 2
```
### Add an agent
Create `.claude/agents/your-agent.md`:
```yaml
---
name: your-agent
description: What it does. Include "PROACTIVELY" for auto-invocation.
tools: Read, Grep, Glob, Bash
model: sonnet # or opus for deep reasoning
---
System prompt describing the agent's role and review criteria.
```

`.claude/agents/fleet-security-auditor.md`:
---
name: fleet-security-auditor
description: Fleet-specific security analysis covering MDM, osquery, API auth, and device management threat models. Use when touching auth, MDM, enrollment, or user data.
tools: Read, Grep, Glob, Bash
model: opus
---
You are a security engineer specializing in the Fleet codebase. Think like an attacker targeting a device management platform that controls thousands of endpoints.
## Fleet-Specific Threat Categories
### API Authorization
- Missing `svc.authz.Authorize(ctx, entity, fleet.ActionX)` calls in service methods
- Privilege escalation between teams (team admin accessing another team's data)
- IDOR (insecure direct object references) on host, policy, or query IDs
- Viewer context: always derive user identity from `viewer.FromContext(ctx)`, never from request data
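A minimal, hypothetical sketch of the IDOR-safe pattern to look for — real Fleet code uses `svc.authz.Authorize` and `fleet` types; the names below are stand-ins for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// host is a stand-in for fleet.Host.
type host struct {
	ID     uint
	TeamID uint
}

var errForbidden = errors.New("forbidden")

// authorize is a hypothetical team-scoped check standing in for svc.authz.Authorize.
func authorize(userTeam uint, h host) error {
	if h.TeamID != userTeam {
		return errForbidden
	}
	return nil
}

// getHost shows the shape the auditor should expect: load the entity, then
// authorize against the *loaded* entity (not just the raw ID from the request),
// so a team admin cannot read another team's host by guessing IDs.
func getHost(userTeam uint, hosts map[uint]host, id uint) (host, error) {
	h, ok := hosts[id]
	if !ok {
		return host{}, errors.New("not found")
	}
	if err := authorize(userTeam, h); err != nil {
		return host{}, err
	}
	return h, nil
}

func main() {
	hosts := map[uint]host{1: {ID: 1, TeamID: 7}}
	_, err := getHost(3, hosts, 1) // user on team 3 requests team 7's host
	fmt.Println(err)               // forbidden
}
```

Flag service methods that skip the post-load check, or that authorize only against the generic entity type.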
### MDM Profile Payloads
- Malicious configuration profiles (Apple .mobileconfig, Windows .xml, Android .json)
- Profile injection that could modify device security settings
- Certificate payloads with untrusted or self-signed certs
- DDM declaration validation against Apple reference
### osquery Query Injection
- SQL injection through scheduled queries or live query parameters
- Queries accessing sensitive host data beyond intended scope
- Query result exfiltration through webhook or logging channels
### Enrollment & Secrets
- Enrollment secret exposure in API responses or logs
- Enrollment secret scoping (must be team-specific, not global)
- Orbit agent authentication token handling
### Certificate & SCEP Handling
- Private key exposure in logs, responses, or error messages
- Certificate chain validation completeness
- SCEP challenge password handling
### Team Permission Boundaries
- Cross-team data leakage in list/search endpoints
- Team isolation violations in batch operations
- Global vs team-scoped resource access
### License Enforcement
- Enterprise features accessible without valid license
- License check bypasses in API or service layer
### PII & Sensitive Data
- Host identifiers, serial numbers, or user emails in log output
- Sensitive MDM payloads in error messages
- Enrollment secrets or API tokens in debug logging
## Output Format
For each finding:
- **Severity**: CRITICAL / HIGH / MEDIUM / LOW
- **Location**: File and line
- **Vulnerability**: What the issue is
- **Exploit scenario**: How an attacker could exploit this in a Fleet deployment
- **Fix**: Specific remediation

`.claude/agents/frontend-reviewer.md`:
---
name: frontend-reviewer
description: Reviews React/TypeScript frontend changes in Fleet for conventions, type safety, component structure, and accessibility. Run PROACTIVELY after modifying frontend files.
tools: Read, Grep, Glob, Bash
model: sonnet
---
You are a frontend code reviewer specialized in Fleet's React/TypeScript codebase. Review changes with knowledge of Fleet's specific patterns and conventions.
## What you check
### TypeScript strictness
- No `any` types — use `unknown` with type guards or proper interfaces
- Interfaces from `frontend/interfaces/` used correctly (IHost, IUser, etc.)
- Proper type narrowing before accessing nullable fields
### React Query patterns
- `useQuery` with proper `[queryKey, dependency]` array and `enabled` option
- `useMutation` for write operations
- No manual useState/useEffect for data fetching when React Query is appropriate
### Component structure
- Follows 4-file pattern: `ComponentName.tsx`, `_styles.scss`, `ComponentName.tests.tsx`, `index.ts`
- New components created with `./frontend/components/generate -n Name -p path`
- Proper named exports (not default exports for new code)
### SCSS / BEM conventions
- `const baseClass = "component-name"` defined at top
- BEM elements: `${baseClass}__element`
- BEM modifiers: `${baseClass}--modifier`
- Styles in `_styles.scss` files
### API service usage
- Uses `sendRequest` from `frontend/services/`
- Endpoint constants from `frontend/utilities/endpoints.ts`
- Proper error handling for API calls
### Accessibility
- ARIA attributes on interactive elements
- Keyboard navigation support
- Semantic HTML elements
## Output format
Organize findings by severity:
1. **Blocking** — must fix before merge (type errors, broken patterns, accessibility violations)
2. **Important** — should fix (convention violations, missing types)
3. **Minor** — style nits and suggestions

`.claude/agents/go-reviewer.md`:
---
name: go-reviewer
description: Reviews Go code changes in Fleet for bugs, conventions, and security. Run PROACTIVELY after modifying Go files.
tools: Read, Grep, Glob, Bash
model: sonnet
---
# Go Code Reviewer for Fleet
You are a Go code reviewer specialized in the Fleet codebase. Review code changes with deep knowledge of Fleet's patterns and conventions.

Read the project context file at `~/.fleet/claude-projects/$ARGUMENTS.md`. This contains background, decisions, and conventions for a specific workstream within Fleet.
Also check for a project-specific memory file named `$ARGUMENTS.md` in your auto memory directory (the persistent memory directory mentioned in your system instructions). If it exists, read it too — it contains things learned while working on this project in previous sessions.
If the project context file was found, give a brief summary of what you know and ask what we're working on today.
If the project context file doesn't exist:
1. Tell the user no project named "$ARGUMENTS" was found.
2. List any existing `.md` files in `~/.fleet/claude-projects/` so they can see what's available.
3. Ask if they'd like to initialize a new project with that name.
4. If they don't want to initialize, stop here.
5. If they do, ask them to brain-dump everything they know about the workstream — the goal, what areas of the codebase it touches, key decisions, gotchas, anything they've been repeating at the start of each session. A sentence is fine, a paragraph is better. Also offer: "I can also scan your recent session transcripts for relevant context — would you like me to look back through recent chats?"
6. If they want you to scan prior sessions, look at the `.jsonl` transcript files in the Claude project directory (the same directory that holds your auto memory). Read the most recent 5-10, skimming for messages related to the workstream. These are large files, so read selectively — check the first few hundred lines of each to gauge relevance before reading more deeply.
7. Using their description, any prior session context, and codebase exploration, find relevant files, patterns, types, and existing implementations related to the workstream.
8. Create `~/.fleet/claude-projects/$ARGUMENTS.md` populated with what you found, using this structure:
```markdown
# Project: $ARGUMENTS
## Background
<!-- What is this workstream about, in the user's words + what you learned -->
## How It Works
<!-- Key mechanisms, patterns, and code flow you discovered -->
## Key Files
<!-- Important file paths for this workstream, with brief descriptions -->
## Key Decisions
<!-- Important architectural or design decisions -->
## Status
<!-- What's done, what remains -->
```
9. Show the user what you wrote and ask if they'd like to adjust anything before continuing.
As you work on a project, update the memory file (in your auto memory directory, named `$ARGUMENTS.md`) with useful discoveries — gotchas, important file paths, patterns — but not session-specific details.

Run Go tests related to my recent changes. Look at `git diff` and `git diff --cached` to determine which packages were modified.
For each modified package, run the tests with appropriate env vars:
- If the package is under `server/datastore/mysql`: use `MYSQL_TEST=1`
- If the package is under `server/service`: use `MYSQL_TEST=1 REDIS_TEST=1`
- Otherwise: run without special env vars
If an argument is provided, use it as a `-run` filter: $ARGUMENTS
Show a summary of results: which packages passed, which failed, and any failure details.

`.claude/goimports.sh`:
#!/bin/sh
# PostToolUse hook: run goimports on Go files after Edit/Write
# Receives tool event JSON on stdin
INPUT=$(cat)
# Extract file_path with grep to avoid jq parse errors from control chars in tool input
FILE_PATH=$(printf '%s' "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"//;s/"$//')
if [ -z "$FILE_PATH" ]; then
exit 0
fi
case "$FILE_PATH" in
*.go)
if command -v goimports >/dev/null 2>&1; then
goimports -w "$FILE_PATH" 2>/dev/null
elif command -v gofumpt >/dev/null 2>&1; then
gofumpt -w "$FILE_PATH" 2>/dev/null
else
gofmt -w "$FILE_PATH" 2>/dev/null
fi
;;
esac
exit 0

#!/bin/sh
# PreToolUse hook: block dangerous bash commands
# Exit 0 = allow, Exit 2 = block
INPUT=$(cat)
# Extract command with grep to avoid jq parse errors from control chars in tool input
COMMAND=$(printf '%s' "$INPUT" | grep -o '"command"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"command"[[:space:]]*:[[:space:]]*"//;s/"$//')
if [ -z "$COMMAND" ]; then
exit 0
fi
# Block rm -rf with dangerous targets (/, ~, *, bare . but not ./path)
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+/' && {
echo "BLOCKED: rm -rf with absolute path" >&2
exit 2
}
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+~' && {
echo "BLOCKED: rm -rf home directory" >&2
exit 2
}
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+\*' && {
echo "BLOCKED: rm -rf wildcard" >&2
exit 2
}
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+\.$' && {
echo "BLOCKED: rm -rf current directory" >&2
exit 2
}
# Block force push to main/master
echo "$COMMAND" | grep -qiE 'git\s+push\s+.*(--force|-f)\s+.*(main|master)' && {
echo "BLOCKED: force push to main/master" >&2
exit 2
}
# Block hard reset to remote
echo "$COMMAND" | grep -qiE 'git\s+reset\s+--hard\s+origin/' && {
echo "BLOCKED: hard reset to remote" >&2
exit 2
}
# Block pipe-to-shell
echo "$COMMAND" | grep -qiE '(curl|wget)\s+.*\|\s*(ba)?sh' && {
echo "BLOCKED: pipe to shell" >&2
exit 2
}
exit 0

#!/bin/sh
# PreToolUse hook: block dangerous bash commands
# Exit 0 = allow, Exit 2 = block
INPUT=$(cat)
COMMAND=$(echo "$INPUT" | jq -r '.tool_input.command // empty')
if [ -z "$COMMAND" ]; then
exit 0
fi
# Block rm -rf with dangerous targets (/, ~, *, bare . but not ./path)
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+/' && {
echo "BLOCKED: rm -rf with absolute path" >&2
exit 2
}
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+~' && {
echo "BLOCKED: rm -rf home directory" >&2
exit 2
}
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+\*' && {
echo "BLOCKED: rm -rf wildcard" >&2
exit 2
}
echo "$COMMAND" | grep -qE 'rm\s+-rf\s+\.$' && {
echo "BLOCKED: rm -rf current directory" >&2
exit 2
}
# Block force push to main/master
echo "$COMMAND" | grep -qiE 'git\s+push\s+.*(--force|-f)\s+.*(main|master)' && {
echo "BLOCKED: force push to main/master" >&2
exit 2
}
# Block hard reset to remote
echo "$COMMAND" | grep -qiE 'git\s+reset\s+--hard\s+origin/' && {
echo "BLOCKED: hard reset to remote" >&2
exit 2
}
# Block pipe-to-shell
echo "$COMMAND" | grep -qiE '(curl|wget)\s+.*\|\s*(ba)?sh' && {
echo "BLOCKED: pipe to shell" >&2
exit 2
}
exit 0

`.claude/hooks/lint-on-save.sh`:
#!/bin/sh
# PostToolUse hook: auto-fix lint issues, then report anything remaining
# Uses the project's own make lint-go-incremental (only checks changes since branching from main)
# Runs after formatters (goimports, prettier) so it only sees convention violations
INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Need to be in the project root for make targets
PROJECT_DIR=$(echo "$INPUT" | jq -r '.cwd // empty')
if [ -z "$PROJECT_DIR" ]; then
PROJECT_DIR="$CLAUDE_PROJECT_DIR"
fi
if [ -n "$PROJECT_DIR" ]; then
cd "$PROJECT_DIR" || exit 0
fi
TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT
case "$FILE_PATH" in
*.go)
# Skip third_party (with or without leading path)
case "$FILE_PATH" in
third_party/*|*/third_party/*) exit 0 ;;
esac
# First pass: auto-fix what we can (uses golangci-lint directly for --fix)
PKG_DIR=$(dirname "$FILE_PATH")
if command -v golangci-lint >/dev/null 2>&1; then
golangci-lint run --fix "$PKG_DIR/..." > /dev/null 2>&1
fi
# Second pass: use project's incremental linter (only changes since branching from main)
if [ -f Makefile ] && grep -q "lint-go-incremental" Makefile; then
make lint-go-incremental > "$TMPFILE" 2>&1
elif command -v golangci-lint >/dev/null 2>&1; then
# Fallback if make target isn't available
golangci-lint run "$PKG_DIR/..." > "$TMPFILE" 2>&1
else
exit 0
fi
# Filter out noise (level=warning, command echo, summary) and keep only real violations
# Real violations look like: path/to/file.go:LINE:COL: message (lintername)
VIOLATIONS=$(grep -v "^level=" "$TMPFILE" | grep -v "^\\./" | grep -v "^[0-9]* issues" | grep -v "^$" | grep -E '\.go:[0-9]+:[0-9]+:' | head -20)
if [ -n "$VIOLATIONS" ]; then
echo "$VIOLATIONS" | jq -Rsc --arg fp "$FILE_PATH" \
'{hookSpecificOutput: {hookEventName: "PostToolUse", additionalContext: ("make lint-go-incremental found issues after editing " + $fp + ":\n" + .)}}'
fi
;;
*.ts|*.tsx)
# Determine eslint binary (prefer local, avoid npx auto-install)
if [ -x ./node_modules/.bin/eslint ]; then
ESLINT="./node_modules/.bin/eslint"
elif command -v npx >/dev/null 2>&1 && npx --no-install eslint --version >/dev/null 2>&1; then
ESLINT="npx --no-install eslint"
else
exit 0
fi
if [ -n "$ESLINT" ]; then
# First pass: auto-fix
$ESLINT --fix "$FILE_PATH" > /dev/null 2>&1
# Second pass: capture remaining issues (include stderr for config/parser errors)
$ESLINT "$FILE_PATH" > "$TMPFILE" 2>&1
if grep -q "error\|warning\|Error:" "$TMPFILE"; then
jq -Rsc --arg fp "$FILE_PATH" \
'{hookSpecificOutput: {hookEventName: "PostToolUse", additionalContext: ("ESLint found issues after editing " + $fp + ":\n" + .)}}' \
< "$TMPFILE"
fi
fi
;;
esac
exit 0

#!/bin/sh
# PostToolUse hook: run prettier on frontend files after Edit/Write
# Receives tool event JSON on stdin
INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')
if [ -z "$FILE_PATH" ]; then
exit 0
fi
case "$FILE_PATH" in
*.ts|*.tsx|*.scss|*.css|*.js|*.jsx)
# Use local prettier (avoid npx auto-install over network)
if [ -x ./node_modules/.bin/prettier ]; then
./node_modules/.bin/prettier --write "$FILE_PATH" 2>/dev/null
elif command -v npx >/dev/null 2>&1 && npx --no-install prettier --version >/dev/null 2>&1; then
npx --no-install prettier --write "$FILE_PATH" 2>/dev/null
fi
;;
esac
exit 0

`.claude/lint-on-save.sh`:
#!/bin/sh
# PostToolUse hook: auto-fix lint issues, then report anything remaining
# Runs golangci-lint on the affected package (not make lint-go-incremental, which is too
# slow for a PostToolUse hook). Runs after formatters (goimports, prettier) so it only
# sees convention violations.
INPUT=$(cat)
# Extract file_path with grep to avoid jq parse errors from control chars in tool input
FILE_PATH=$(printf '%s' "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"//;s/"$//')
if [ -z "$FILE_PATH" ]; then
exit 0
fi
# Need to be in the project root for make targets
PROJECT_DIR=$(printf '%s' "$INPUT" | grep -o '"cwd"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"cwd"[[:space:]]*:[[:space:]]*"//;s/"$//')
if [ -z "$PROJECT_DIR" ]; then
PROJECT_DIR="$CLAUDE_PROJECT_DIR"
fi
if [ -n "$PROJECT_DIR" ]; then
cd "$PROJECT_DIR" || exit 0
fi
TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT
case "$FILE_PATH" in
*.go)
# Skip third_party (with or without leading path)
case "$FILE_PATH" in
third_party/*|*/third_party/*) exit 0 ;;
esac
# First pass: auto-fix what we can (uses golangci-lint directly for --fix)
PKG_DIR=$(dirname "$FILE_PATH")
if command -v golangci-lint >/dev/null 2>&1; then
golangci-lint run --fix "$PKG_DIR/..." > /dev/null 2>&1
fi
# Second pass: lint the affected package (fast) and report remaining issues
if command -v golangci-lint >/dev/null 2>&1; then
golangci-lint run "$PKG_DIR/..." > "$TMPFILE" 2>&1
else
exit 0
fi
# Filter to real violations: path/to/file.go:LINE:COL: message (lintername)
VIOLATIONS=$(grep -E '\.go:[0-9]+:[0-9]+:' "$TMPFILE" | head -20)
if [ -n "$VIOLATIONS" ]; then
echo "$VIOLATIONS" | jq -Rsc --arg fp "$FILE_PATH" \
'{hookSpecificOutput: {hookEventName: "PostToolUse", additionalContext: ("golangci-lint found issues after editing " + $fp + ":\n" + .)}}'
fi
;;
*.ts|*.tsx)
# Determine eslint binary (prefer local, avoid npx auto-install)
if [ -x ./node_modules/.bin/eslint ]; then
ESLINT="./node_modules/.bin/eslint"
elif command -v npx >/dev/null 2>&1 && npx --no-install eslint --version >/dev/null 2>&1; then
ESLINT="npx --no-install eslint"
else
exit 0
fi
if [ -n "$ESLINT" ]; then
# First pass: auto-fix
$ESLINT --fix "$FILE_PATH" > /dev/null 2>&1
# Second pass: capture remaining issues (include stderr for config/parser errors)
$ESLINT "$FILE_PATH" > "$TMPFILE" 2>&1
if grep -q "error\|warning\|Error:" "$TMPFILE"; then
jq -Rsc --arg fp "$FILE_PATH" \
'{hookSpecificOutput: {hookEventName: "PostToolUse", additionalContext: ("ESLint found issues after editing " + $fp + ":\n" + .)}}' \
< "$TMPFILE"
fi
fi
;;
esac
exit 0

`.claude/prettier-frontend.sh`:
#!/bin/sh
# PostToolUse hook: run prettier on frontend files after Edit/Write
# Receives tool event JSON on stdin
INPUT=$(cat)
# Extract file_path with grep to avoid jq parse errors from control chars in tool input
FILE_PATH=$(printf '%s' "$INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | head -1 | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"//;s/"$//')
if [ -z "$FILE_PATH" ]; then
exit 0
fi
case "$FILE_PATH" in
*.ts|*.tsx|*.scss|*.css|*.js|*.jsx)
# Use local prettier (avoid npx auto-install over network)
if [ -x ./node_modules/.bin/prettier ]; then
./node_modules/.bin/prettier --write "$FILE_PATH" 2>/dev/null
elif command -v npx >/dev/null 2>&1 && npx --no-install prettier --version >/dev/null 2>&1; then
npx --no-install prettier --write "$FILE_PATH" 2>/dev/null
fi
;;
esac
exit 0

---
paths:
- "server/service/**/*.go"
---
# Fleet API endpoint conventions
These conventions apply when working on API endpoints in the service layer. Not every file in `server/service/` defines endpoints, but the patterns below should be followed whenever you create or modify one.
## Endpoint registration
Register endpoints in `server/service/handler.go`:
```go
ue.POST("/api/_version_/fleet/{resource}", endpointFunc, requestType{})
ue.GET("/api/_version_/fleet/{resource}", endpointFunc, nil)
```
`_version_` is replaced with the actual API version at runtime.
## API versioning
- `ue.EndingAtVersion("v1")` — endpoint only available in v1 and earlier
- `ue.StartingAtVersion("2022-04")` — endpoint available from 2022-04 onward
- Current versions: `v1`, `2022-04`
- New endpoints should use `StartingAtVersion("2022-04")`
## Request body size limits
Use `ue.WithRequestBodySizeLimit(N)` for endpoints accepting large payloads (e.g., bootstrap packages, installers).
## Error response pattern
Return errors in the response body, not as the second return:
```go
return xResponse{Err: err}, nil // correct
return nil, err // WRONG for Fleet endpoints
```
Every response struct needs: `func (r xResponse) Error() error { return r.Err }`
## Reference example
See `server/service/vulnerabilities.go` for a complete example of the request/response/endpoint/service pattern.

---
paths:
- "server/datastore/**/*.go"
---
# Fleet Database Conventions
## Migration Files
- Location: `server/datastore/mysql/migrations/tables/`
- Naming: `YYYYMMDDHHMMSS_CamelCaseName.go` (timestamp + descriptive CamelCase)
- Every migration MUST have a corresponding `_test.go` file
- Structure:
```go
func init() {
MigrationClient.AddMigration(Up_YYYYMMDDHHMMSS, Down_YYYYMMDDHHMMSS)
}
func Up_YYYYMMDDHHMMSS(tx *sql.Tx) error { ... }
func Down_YYYYMMDDHHMMSS(tx *sql.Tx) error { return nil } // always no-op
```
- Test pattern: `applyUpToPrev(t)` → set up data → `applyNext(t, db)` → verify
- Create with: `make migration name=YourChangeName`
## Query Building
- Use `goqu` (github.com/doug-martin/goqu/v9) for SQL query building
- Pattern: `dialect.From(goqu.I("table_name")).Select(...).Where(...)`
- NEVER use string concatenation for SQL — parameterized queries only
- The `gosec` linter checks for SQL concatenation (G202)
## Reader vs Writer
- Reads: `ds.reader(ctx)` — may hit a read replica
- Writes: `ds.writer(ctx)` — always hits the primary
- Using the wrong one causes stale reads or replica lag issues
## Testing
- Integration tests require `MYSQL_TEST=1`: `MYSQL_TEST=1 go test ./server/datastore/mysql/...`
- Use `CreateMySQLDS(t)` helper for test datastore setup
- Table-driven tests with `t.Run` subtests
## Transactions
- Inside `withTx`/`withRetryTxx` callbacks, use the transaction argument — NEVER call `ds.reader(ctx)` or `ds.writer(ctx)` inside a transaction (custom linter rule catches this)
- Same applies to any function that receives a `sqlx.ExtContext` or `sqlx.ExecContext` as an argument — use that argument, not the datastore's reader/writer
## Batch Operations
- Use configurable batch size variables for large operations
- Order key allowlists for user-facing sort fields (prevent SQL injection via ORDER BY)
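For the ORDER BY allowlist, a self-contained sketch — the key names are hypothetical; real endpoints define their own allowlists of sortable columns:

```go
package main

import "fmt"

// allowedOrderKeys is a hypothetical allowlist of user-sortable columns.
var allowedOrderKeys = map[string]struct{}{
	"hostname":   {},
	"created_at": {},
}

// orderClause returns a safe ORDER BY fragment. Column names cannot be bound
// as query parameters, so the allowlist is what prevents injection here:
// anything not on the list falls back to a default key.
func orderClause(userKey string) string {
	if _, ok := allowedOrderKeys[userKey]; !ok {
		userKey = "created_at"
	}
	return "ORDER BY " + userKey
}

func main() {
	fmt.Println(orderClause("hostname"))            // ORDER BY hostname
	fmt.Println(orderClause("1; DROP TABLE hosts")) // ORDER BY created_at
}
```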

---
paths:
- "frontend/**/*.ts"
- "frontend/**/*.tsx"
---
# Fleet Frontend Conventions
## Component Structure
Every component should have this 4-file structure:
- `ComponentName.tsx` — Main component
- `_styles.scss` — Component-specific SCSS styles
- `ComponentName.tests.tsx` — Tests
- `index.ts` — Named export
Use the component generator for new components:
```
./frontend/components/generate -n PascalCaseName -p optional/path/to/parent
```
## React Query
- Use `useQuery` for data fetching with `[queryKey, dependency]` and `enabled` option
- Prefer React Query over manual useState/useEffect for API data
- Use `useMutation` for write operations — invalidate related queries on success
- Query key pattern: `["resource", id, teamId]` — include all dependencies
## API Services
- API clients live in `frontend/services/entities/`
- Use `sendRequest(method, path, body?, queryParams?)` from `frontend/services/`
- Endpoint constants in `frontend/utilities/endpoints.ts`
- Build query strings with `buildQueryStringFromParams()` from `frontend/utilities/url/`
- Build full paths with `getPathWithQueryParams(path, params)` — auto-filters undefined/null values
## Permission Checking
Use helpers from `frontend/utilities/permissions/permissions.ts`:
- Global roles: `permissions.isGlobalAdmin(user)`, `isGlobalMaintainer(user)`, `isOnGlobalTeam(user)`
- Team roles: `permissions.isTeamAdmin(user, teamId)`, `isTeamMaintainer(user, teamId)`, `isTeamObserver(user, teamId)`
- Multi-team: `permissions.isAnyTeamAdmin(user)`, `isOnlyObserver(user)`
- License: `permissions.isPremiumTier(config)`, `isFreeTier(config)`
- MDM: `permissions.isMacMdmEnabledAndConfigured(config)`, `isWindowsMdmEnabledAndConfigured(config)`
## Team Context
Use the `useTeamIdParam` hook for team-scoped pages:
- `currentTeamId`: -1 (All teams), 0 (No team), or positive team ID
- `teamIdForApi`: undefined (All teams), 0 (No team), or positive ID — **always use this for API calls**
- `handleTeamChange(newTeamId)` to switch teams
- `isTeamAdmin`, `isTeamMaintainer`, `isObserverPlus` for role checks
## Notifications
- Use `renderFlash(alertType, message)` from `NotificationContext`
- Types: `"success"`, `"error"`, `"warning-filled"`
- Use `renderMultiFlash()` for batch operations
## XSS Prevention
- ALWAYS sanitize user-generated HTML with `DOMPurify.sanitize(html, options)` before `dangerouslySetInnerHTML`
- Configure allowed tags/attributes explicitly: `{ ADD_ATTR: ["target"] }`
## String Utilities
Use helpers from `frontend/utilities/strings/stringUtils.ts`:
- `capitalize(str)`, `capitalizeRole(role)` — handle special casing (Observer+)
- `pluralize(count, singular, pluralSuffix, singularSuffix)` — "1 host" vs "2 hosts"
- `stripQuotes(str)`, `strToBool(str)` — input parsing
- `enforceFleetSentenceCasing(str)` — respects Fleet stylization rules
## Styling (SCSS + BEM)
- Define `const baseClass = "component-name"` at the top of the component
- Elements: `` className={`${baseClass}__element-name`} ``
- Modifiers: `` className={`${baseClass}--modifier`} ``
- Use `classnames()` for conditional classes
- Style files use underscore prefix: `_styles.scss`
## Interfaces & Types
- Interface files live in `frontend/interfaces/` with `I` prefix: `IHost`, `IUser`, `IPack`
- Legacy pattern: some files export both PropTypes (default export) and TypeScript interfaces (named export)
- New code should use TypeScript interfaces only
## Hooks & Context
- Custom hooks in `frontend/hooks/` — e.g., `useTeamIdParam`, `useCheckboxListStateManagement`
- Context providers in `frontend/context/``AppContext` for global state, `NotificationContext` for flash messages
## Terminology
- "Teams" are now called "fleets" in the product. Code still uses `team_id`, `useTeamIdParam`, `permissions.isTeamAdmin`, etc. — don't rename existing APIs, but use "fleet" in new user-facing strings and comments.
- "Queries" are now called "reports." The word "query" now refers solely to a SQL query. Code still uses `useQuery`, `queryKey`, etc. for React Query — that's unrelated to the product terminology change.
## Linting & Formatting
- ESLint: extends airbnb + typescript-eslint + prettier
- Prettier: default config (`.prettierrc.json`)
- `console.log` is allowed (`no-console` is off) — useful for debugging, but clean up before merging
- `react-hooks/exhaustive-deps` is enforced as a warning — include all dependencies in hook dependency arrays
- Run `make lint-js` or `yarn lint` and `npx prettier --check frontend/` before submitting

---
paths:
- "server/**/*.go"
- "cmd/**/*.go"
- "orbit/**/*.go"
- "ee/**/*.go"
- "pkg/**/*.go"
- "tools/**/*.go"
- "client/**/*.go"
- "test/**/*.go"
---
# Fleet Go Backend Conventions
## Error Handling
- Wrap errors with `ctxerr.Wrap(ctx, err, "description")` — never `pkg/errors` or `fmt.Errorf` with `%w`
- For error messages without wrapping, use `errors.New("msg")` not `fmt.Errorf("msg")` (the linter catches this)
- Banned imports: `github.com/pkg/errors`, `github.com/valyala/fastjson`, `github.com/valyala/fasttemplate`
- Use the right error type for the right situation:
- `fleet.NewInvalidArgumentError(field, reason)` — input validation (422). Accumulate with `.Append(field, reason)`, check `.HasErrors()`
- `&fleet.BadRequestError{Message: "..."}` — malformed request (400)
- `fleet.NewAuthFailedError()` / `fleet.NewAuthRequiredError()` — auth failures (401)
- `fleet.NewPermissionError(msg)` — authorized but insufficient role (403)
- Implement `IsNotFound() bool` interface — resource not found. Check with `fleet.IsNotFound(err)`
- `&fleet.ConflictError{Message: "..."}` — duplicate/conflict (409)
- Check error types with: `fleet.IsNotFound(err)`, `fleet.IsAlreadyExists(err)`
## Input Validation
- Validate in service methods, not in endpoint functions
- Accumulate all errors before returning:
```go
invalid := fleet.NewInvalidArgumentError("name", "cannot be empty")
if badCondition {
invalid.Append("email", "must be valid")
}
if invalid.HasErrors() {
return invalid
}
```
## Service Methods
- Signature: `func (svc *Service) MethodName(ctx context.Context, ...) (..., error)`
- Start with authorization: `svc.authz.Authorize(ctx, &fleet.Entity{}, fleet.ActionX)`
- For entity-specific auth, double-authorize: generic check first, load entity, then team-scoped check:
```go
if err := svc.authz.Authorize(ctx, &fleet.Host{}, fleet.ActionRead); err != nil { return nil, err }
host, err := svc.ds.Host(ctx, hostID)
if err != nil { return nil, ctxerr.Wrap(ctx, err, "get host") }
if err := svc.authz.Authorize(ctx, host, fleet.ActionRead); err != nil { return nil, err }
```
- Return errors via ctxerr wrapping
## Viewer Context
- Get current user: `vc, ok := viewer.FromContext(ctx)` — NEVER trust user identity from request body
- Helpers: `vc.UserID()`, `vc.Email()`, `vc.IsLoggedIn()`, `vc.CanPerformActions()`
- System operations: `viewer.NewSystemContext(ctx)` for admin-level automated actions
## Pagination
- Use `fleet.ListOptions` for all list endpoints (Page, PerPage, OrderKey, OrderDirection, MatchQuery, After)
- Return `*fleet.PaginationMetadata` when `IncludeMetadata` is true
- Cursor pagination: check `ListOptions.UsesCursorPagination()`
## Request/Response Pattern
- Request structs: lowercase type, json/url tags: `type listEntitiesRequest struct`
- Response structs: include `Err error` field and `func (r xResponse) Error() error { return r.Err }`
- Endpoint functions: `func xEndpoint(ctx context.Context, request interface{}, svc fleet.Service) (fleet.Errorer, error)`
- Errors go in the response body: `return xResponse{Err: err}, nil`
## Logging
- Use slog with context: `logger.InfoContext(ctx, "message", "key", value)`
- NEVER use bare `slog.Debug`, `slog.Info`, `slog.Warn`, `slog.Error` — the `forbidigo` linter rejects these
- NEVER use `print()` or `println()` — use structured logging
## Imports & Utilities
- Internal packages: `github.com/fleetdm/fleet/v4/server/` prefix
- **HTTP clients**: Use `fleethttp.NewClient()` — never `http.Client{}` or `new(http.Client)` directly (custom linter rule)
- **Pointers (Go 1.26+)**: Use `new(expression)` for pointer values: `new("value")`, `new(true)`, `new(yearsSince(born))`. Do NOT use the `server/ptr` package (`ptr.String()`, `ptr.Uint()`, etc.) in new code — it's legacy. You'll see it throughout the existing codebase but should not follow that pattern.
- **Random numbers**: use `math/rand/v2` instead of `math/rand`
- Sets: use `map[T]struct{}`, convert to slice with `slices.Collect(maps.Keys(m))`
- Flexible JSON: use `json.RawMessage` for configs stored as JSON blobs
## Context Utilities
- `ctxdb.RequirePrimary(ctx, true)` — force reads on primary DB (use before read-then-write)
- `ctxdb.BypassCachedMysql(ctx, true)` — disable MySQL cache layer
- `ctxerr.Wrap(ctx, err, "msg")` — ALWAYS use for error wrapping
## Testing
- Use `require` and `assert` from `github.com/stretchr/testify`
- Mock invocation tracking: check `ds.{FuncName}FuncInvoked` bool (auto-set by generated mocks)
- Run `go test ./server/service/` after adding new datastore interface methods — uninitialized mocks crash other tests
- Integration tests need `MYSQL_TEST=1 REDIS_TEST=1`
- Use `t.Context()` instead of `context.Background()`
## Bounded contexts
Some domains use a self-contained bounded context pattern instead of the traditional `fleet/``service/``datastore/` layers:
- `server/activity/` — internal types, mysql, service, API, and bootstrap in one directory
- `server/mdm/` — similar self-contained structure for MDM
When working in these directories, follow the local patterns (internal packages, local types) rather than the top-level Fleet architecture.
## Linting
- Follow `.golangci.yml` — enabled linters: depguard, forbidigo, gosec, gocritic, revive, errcheck, staticcheck
- After editing: `make lint-go-incremental` (only checks changes since branching from main)
- Before committing: `make lint-go` (full lint)

View file

@ -0,0 +1,40 @@
---
paths:
- "orbit/**/*.go"
---
# Fleet Orbit conventions
Orbit is Fleet's lightweight agent that manages osquery, handles updates, and provides device-level functionality. It runs on end-user devices, so reliability and security are critical.
## Architecture
- **Entry point**: `orbit/cmd/orbit/` — main binary
- **Packages**: `orbit/pkg/` — modular packages for each concern
- **Update system**: `orbit/pkg/update/` — TUF-based auto-update for osquery, orbit, and desktop
- **Packaging**: `orbit/pkg/packaging/` — builds installers for macOS (.pkg), Windows (.msi), and Linux (.deb/.rpm)
- **Platform-specific code**: use build tags (`_darwin.go`, `_windows.go`, `_linux.go`) and `_stub.go` for unsupported platforms
## Key patterns
- **Keystore**: `orbit/pkg/keystore/` — platform-specific secure key storage (macOS Keychain, Windows DPAPI, Linux file-based). Always use the keystore abstraction, never raw file I/O for secrets.
- **osquery management**: `orbit/pkg/osquery/` — launching, monitoring, and communicating with osquery. Orbit owns the osquery lifecycle.
- **Token management**: `orbit/pkg/token/` — orbit enrollment token read/write with file locking
- **Platform executables**: `orbit/pkg/execuser/` — run commands as the logged-in user (not root). Critical for UI prompts and desktop app.
## Security considerations
- Orbit runs as root/SYSTEM — every input must be validated
- Never log enrollment tokens, orbit keys, or device identifiers at info level
- File operations on device should use restrictive permissions (0600/0700)
- TUF update verification must never be bypassed
- Use `orbit/pkg/insecure/` only for intentionally insecure test configurations
## Testing
- Unit tests don't need special env vars (no MySQL/Redis)
- Platform-specific tests may need build tags: `go test -tags darwin ./orbit/pkg/...`
- Use `_stub.go` files for cross-platform test compatibility
- Packaging tests may require signing certificates or specific tools (notarytool, WiX)
## Build and packaging
- macOS: `.pkg` built with `pkgbuild`, optional notarization via `notarytool` or `rcodesign`
- Windows: `.msi` built with WiX toolset, templates in `orbit/pkg/packaging/windows_templates.go`
- Linux: `.deb` and `.rpm` via `nfpm`
- Cross-compilation: orbit supports `GOOS`/`GOARCH` targeting

View file

@ -1,4 +1,8 @@
{
"attribution": {
"commit": "",
"pr": ""
},
"env": {
"MYSQL_TEST": "1",
"REDIS_TEST": "1"
@ -7,13 +11,76 @@
"allow": [
"Read(~/.fleet/claude-projects/**)",
"Write(~/.fleet/claude-projects/**)",
"Edit(~/.fleet/claude-projects/**)"
"Edit(~/.fleet/claude-projects/**)",
"Bash(go test*)",
"Bash(go vet*)",
"Bash(go build*)",
"Bash(go fmt*)",
"Bash(gofmt*)",
"Bash(golangci-lint *)",
"Bash(MYSQL_TEST=1 go test*)",
"Bash(MYSQL_TEST=1 REDIS_TEST=1 go test*)",
"Bash(FLEET_INTEGRATION_TESTS_DISABLE_LOG=1 *)",
"Bash(yarn test*)",
"Bash(yarn lint*)",
"Bash(npx prettier*)",
"Bash(npx eslint*)",
"Bash(npx tsc*)",
"Bash(npx jest*)",
"Bash(make test*)",
"Bash(make lint*)",
"Bash(make build*)",
"Bash(make mock*)",
"Bash(make generate*)",
"Bash(make serve*)",
"Bash(make up*)",
"Bash(make db-*)",
"Bash(make migration*)",
"Bash(make deps*)",
"Bash(make e2e-*)",
"Bash(make run-go-tests*)",
"Bash(make fleet-dev*)",
"Bash(make fleetctl-dev*)",
"Bash(make clean*)",
"Bash(make doc*)",
"Bash(make dump-test-schema*)",
"Bash(make analyze-go*)",
"Bash(make update-go*)",
"Bash(make check-go*)",
"Bash(git status*)",
"Bash(git diff*)",
"Bash(git log*)",
"Bash(git show*)",
"Bash(git branch*)",
"Bash(gh pr *)",
"Bash(gh issue *)",
"Bash(gh run *)",
"Bash(gh api *)"
],
"deny": [
"Bash(git push --force*)",
"Bash(git push -f*)",
"Bash(rm -rf /*)",
"Bash(rm -rf ~*)"
]
},
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/guard-dangerous-commands.sh",
"timeout": 5
}
]
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"if": "Edit(**/*.go) || Write(**/*.go)",
"hooks": [
{
"type": "command",
@ -21,6 +88,28 @@
"timeout": 10
}
]
},
{
"matcher": "Edit|Write",
"if": "Edit(frontend/**) || Write(frontend/**)",
"hooks": [
{
"type": "command",
"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/prettier-frontend.sh",
"timeout": 10
}
]
},
{
"matcher": "Edit|Write",
"if": "Edit(**/*.go) || Edit(**/*.ts) || Edit(**/*.tsx) || Write(**/*.go) || Write(**/*.ts) || Write(**/*.tsx)",
"hooks": [
{
"type": "command",
"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/lint-on-save.sh",
"timeout": 60
}
]
}
]
}

View file

@ -0,0 +1,58 @@
---
name: bump-migration
description: Bump a database migration's timestamp to the current time. Required when a PR's migration is older than one already merged to main. Use when asked to "bump migration", "update migration timestamp", or when a migration ordering conflict is detected.
allowed-tools: Bash(go run *), Bash(make dump-test-schema*), Bash(git diff*), Bash(ls *), Read, Grep, Glob
model: sonnet
effort: medium
---
# Bump a database migration timestamp
Bump the migration: $ARGUMENTS
## When to use
This is required when a PR has a database migration with a timestamp older than a migration already merged to main. This happens when a PR has been pending merge for a while and another PR got merged with a more recent migration.
## Process
### 1. Identify the migration to bump
If the user provided a filename, use that. Otherwise, find migrations on this branch that are older than the latest on main:
```bash
# List migrations on this branch that aren't on main
git diff origin/main --name-only -- server/datastore/mysql/migrations/tables/
```
### 2. Run the bump tool
The tool lives at `tools/bump-migration/main.go`. Run it from the repo root:
```bash
go run tools/bump-migration/main.go --source-migration YYYYMMDDHHMMSS_MigrationName.go
```
This will:
- Rename the migration file with a new current timestamp
- Rename the test file (if it exists)
- Update all function names inside both files (`Up_OLDTS` → `Up_NEWTS`, `Down_OLDTS``Down_NEWTS`, `TestUp_OLDTS``TestUp_NEWTS`)
### 3. Optionally regenerate the schema
If the migration affects the schema, add `--regen-schema` to also run `make dump-test-schema`:
```bash
go run tools/bump-migration/main.go --source-migration YYYYMMDDHHMMSS_MigrationName.go --regen-schema
```
### 4. Verify
- Check that the old files are gone and new files exist with the updated timestamp
- Verify the function names inside the files match the new timestamp
- Run `go build ./server/datastore/mysql/migrations/...` to check compilation
## Rules
- Always run from the repo root
- Provide the migration filename, not the test filename
- The tool handles both the migration and its test file automatically

View file

@ -0,0 +1,77 @@
---
name: cherry-pick
description: Cherry-pick a merged PR into the current RC branch. Use when asked to "cherry-pick", "cp into RC", or after merging a PR that needs to go into the current release.
allowed-tools: Bash(git *), Bash(gh pr *), Bash(gh api *), Read, Grep, Glob
effort: low
---
Cherry-pick a merged PR into the current RC branch. Arguments: $ARGUMENTS
Usage: `/cherry-pick <PR_NUMBER> [RC_BRANCH]`
- `PR_NUMBER` (required): The PR number to cherry-pick (e.g. `43078`). If not provided, ask the user.
- `RC_BRANCH` (optional): The target RC branch name (e.g. `rc-minor-fleet-v4.83.0`). If not provided, auto-detect the most recent one.
## Step 1: Ensure main is up to date
1. `git fetch origin`
2. `git checkout main`
3. `git pull origin main`
## Step 2: Identify the RC branch
If an RC branch was provided as the second argument, use it (but still confirm with the user before proceeding).
Otherwise, auto-detect by listing both minor and patch RC branches:
```
git for-each-ref 'refs/remotes/origin/rc-minor-fleet-v*' 'refs/remotes/origin/rc-patch-fleet-v*' --format='%(refname:strip=3)' | grep -E '^rc-(minor|patch)-fleet-v[0-9]+\.[0-9]+\.[0-9]+$' | sort -V
```
From the results, suggest the most recent `rc-minor-fleet-v*` branch as the default. If patch branches also exist, mention them as alternatives. **Always ask the user to confirm the target RC branch before proceeding.**
## Step 3: Get the merge commit and GitHub username
1. Get the PR title:
```
gh pr view <PR_NUMBER> --json title --jq .title
```
2. Get the merge commit SHA:
```
gh pr view <PR_NUMBER> --json mergeCommit --jq .mergeCommit.oid
```
If this returns `null` or an empty value, the PR is not yet merged. Tell the user and stop.
3. Get the GitHub username: `gh api user --jq .login`
## Step 4: Cherry-pick onto a new branch
1. Create a new branch off the RC branch:
```
git checkout -b <github-username>/<short-description>-cp origin/<rc-branch>
```
Derive `<short-description>` from the PR title (lowercase, hyphens, keep it short — 3-5 words max).
2. Check whether the commit is a merge commit by inspecting its parents:
```
git rev-list --parents -n 1 <merge-commit-SHA>
```
If the commit has multiple parents, run:
```
git cherry-pick -m 1 <merge-commit-SHA>
```
Otherwise (squash-merged or rebased), run:
```
git cherry-pick <merge-commit-SHA>
```
3. If there are conflicts, stop and tell the user which files conflict. Do NOT attempt to resolve them automatically.
## Step 5: Push and open PR
1. Push the branch: `git push -u origin HEAD`
2. Open a PR targeting the RC branch (NOT main):
```
gh pr create --base <rc-branch> --title "Cherry-pick #<PR_NUMBER>: <original-title>" --body "$(cat <<'EOF'
Cherry-pick of #<PR_NUMBER> into the RC branch.
EOF
)"
```
3. Report the PR URL to the user.

View file

@ -1,3 +1,10 @@
---
name: find-related-tests
description: Find test files and functions related to recent git changes. Suggests exact go test commands with correct env vars.
allowed-tools: Bash(git *), Read, Grep, Glob
effort: low
---
Look at my recent git changes (`git diff` and `git diff --cached`) and find all related test files.
For each modified file, find:

View file

@ -1,3 +1,11 @@
---
name: fix-ci
description: Diagnose and fix failing CI tests from a GitHub Actions run. Use when asked to "fix CI", "CI failure", or "failing tests in CI".
allowed-tools: Bash(gh *), Bash(go test *), Bash(go build *), Bash(MYSQL_TEST*), Bash(MYSQL_TEST=1 REDIS_TEST=1 *), Bash(FLEET_INTEGRATION_TESTS_DISABLE_LOG=1 *), Read, Grep, Glob, Edit
model: opus
effort: high
---
Fix failing tests from a CI run. The argument is a GitHub Actions run URL or run ID: $ARGUMENTS
## Step 1: Identify failing jobs

View file

@ -0,0 +1,50 @@
---
name: fleet-gitops
description: Help with Fleet GitOps configuration files including queries, profiles, software, and DDM declarations with validation against upstream references.
allowed-tools: Read, Grep, Glob, Edit, Write, WebFetch, WebSearch
effort: high
---
You are helping with Fleet GitOps configuration files: $ARGUMENTS
Focus on the `it-and-security` folder. Apply the following constraints for all work in this session.
## Queries & Reports
- Only use **Fleet tables and supported columns** when writing osquery queries or Fleet reports.
- Do not reference tables or columns that are not present in the Fleet schema for the target platform.
- Validate table and column names against the Fleet schema before including them in a query:
- https://github.com/fleetdm/fleet/tree/main/schema
## Configuration Profiles
When generating or modifying configuration profiles:
- **First-party Apple payloads** (`.mobileconfig`) — validate payload keys, types, and allowed values against the Apple Device Management reference:
- https://github.com/apple/device-management/tree/release/mdm/profiles
- **Third-party Apple payloads** (`.mobileconfig`) — validate against the ProfileManifests community reference:
- https://github.com/ProfileManifests/ProfileManifests
- **Windows CSPs** (`.xml`) — validate CSP paths, formats, and allowed values against Microsoft's MDM protocol reference:
- https://learn.microsoft.com/en-us/windows/client-management/mdm/
- **Android profiles** (`.json`) — validate keys and values against the Android Management API `enterprises.policies` reference:
- https://developers.google.com/android/management/reference/rest/v1/enterprises.policies
## Software
- When adding software for macOS or Windows hosts, **always check the Fleet-maintained app catalog first** before using a custom package:
- https://github.com/fleetdm/fleet/tree/main/ee/maintained-apps
- In GitOps YAML, use the `fleet_maintained_apps` key with the app's `slug` to reference a Fleet-maintained app.
- When remediating a CVE, use Fleet's built-in vulnerability detection to identify affected software, then follow the Software section above to deploy a fix — preferring a Fleet-maintained app update where available, otherwise a custom package.
## Declarative Device Management (DDM)
When generating or modifying DDM declarations:
- Validate declaration types, keys, and values against the Apple DDM reference:
- https://github.com/apple/device-management/tree/release/declarative/declarations
- Ensure the `Type` identifier matches a supported declaration type from the reference.
## References
- Fleet GitOps documentation: https://fleetdm.com/docs/configuration/yaml-files
- Fleet API documentation: https://fleetdm.com/docs/rest-api/rest-api

View file

@ -0,0 +1,69 @@
---
name: lint
description: Run linters on recently changed files with the correct tools for each language. Use when asked to "lint", "check style", or "run linters".
allowed-tools: Bash(make lint*), Bash(golangci-lint *), Bash(go vet*), Bash(yarn lint*), Bash(yarn --cwd *), Bash(npx eslint*), Bash(npx prettier*), Bash(git diff*), Bash(git status*), Read, Grep, Glob
effort: low
---
# Lint recent changes
Run the appropriate linters on files changed in the current branch. Use the project's own make targets when available.
## Process
### 1. Detect changed files
Find recently changed files (last commit, staged, and unstaged):
```bash
git diff --name-only HEAD~1 # Last commit
git diff --name-only --cached # Staged but not committed
git diff --name-only # Unstaged changes
```
Combine all three and deduplicate to get the full set.
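One way to combine and deduplicate the three lists (a sketch; assumes the branch has at least one prior commit):

```shell
# Union of last-commit, staged, and unstaged changes, deduplicated.
{ git diff --name-only HEAD~1; git diff --name-only --cached; git diff --name-only; } | sort -u
```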
### 2. Run linters by language
**Go files** (`*.go`):
Use the project's incremental linter — it only checks changes since branching from main:
```bash
make lint-go-incremental
```
This uses `.golangci-incremental.yml` with `--new-from-merge-base=origin/main`. It's faster and more relevant than linting entire packages.
For a full lint (e.g., before committing), use:
```bash
make lint-go
```
**TypeScript/JavaScript files** (`*.ts`, `*.tsx`, `*.js`, `*.jsx`):
```bash
npx eslint frontend/path/to/changed/files
npx prettier --check frontend/path/to/changed/files
```
Or use the make target:
```bash
make lint-js
```
**SCSS files** (`*.scss`):
```bash
npx prettier --check frontend/path/to/changed/files.scss
```
### 3. Report results
For each linter run, show:
- Which packages/files were linted
- Any errors or warnings found
- Suggested fixes (if the linter provides them)
If everything passes, confirm which linters ran and on which files.
If an argument is provided, use it to filter: $ARGUMENTS
- `go` — only Go linters (uses `make lint-go-incremental`)
- `full` — full Go lint (uses `make lint-go`)
- `js` or `frontend` — only frontend linters (uses `make lint-js`)
- A file path — lint that specific file/package

View file

@ -0,0 +1,82 @@
---
name: new-endpoint
description: Scaffold a new Fleet API endpoint with request/response structs, endpoint function, service method, datastore interface, handler registration, and test stubs.
allowed-tools: Read, Write, Edit, Grep, Glob
model: sonnet
effort: high
disable-model-invocation: true
---
# Scaffold a New Fleet API Endpoint
Create a new API endpoint for: $ARGUMENTS
## Process
### 1. Gather Requirements
- Resource name and HTTP method (GET/POST/PATCH/DELETE)
- URL path (e.g., `/api/_version_/fleet/resource`)
- Request body fields (if any)
- Response body fields
- Which API version (use `StartingAtVersion("2022-04")` for new endpoints)
- Does it need a datastore method?
### 2. Read Reference Patterns
Read `server/service/vulnerabilities.go` for the canonical request/response/endpoint pattern:
- Request struct with json tags
- Response struct with `Err error` field and `Error()` method
- Endpoint function with `(ctx, request, svc)` signature
Read `server/service/handler.go` to find where to register the new endpoint.
### 3. Create Request/Response Structs
```go
type myResourceRequest struct {
	ID   uint   `url:"id"`
	Name string `json:"name"`
}

type myResourceResponse struct {
	Resource *fleet.Resource `json:"resource,omitempty"`
	Err      error           `json:"error,omitempty"`
}

func (r myResourceResponse) Error() error { return r.Err }
```
### 4. Create Endpoint Function
```go
func myResourceEndpoint(ctx context.Context, request interface{}, svc fleet.Service) (fleet.Errorer, error) {
	req := request.(*myResourceRequest)
	result, err := svc.MyResource(ctx, req.ID)
	if err != nil {
		return myResourceResponse{Err: err}, nil
	}
	return myResourceResponse{Resource: result}, nil
}
```
### 5. Add Service Interface Method
In `server/fleet/service.go`, add the method to the `Service` interface.
### 6. Implement Service Method
In the appropriate `server/service/*.go` file:
- Start with `svc.authz.Authorize(ctx, &fleet.Entity{}, fleet.ActionRead)`
- Implement business logic
- Wrap errors with `ctxerr.Wrap`
### 7. Add Datastore Interface Method (if needed)
In `server/fleet/datastore.go`, add the method to the `Datastore` interface.
### 8. Register in handler.go
```go
ue.StartingAtVersion("2022-04").GET("/api/_version_/fleet/resource", myResourceEndpoint, myResourceRequest{})
```
### 9. Create Test Stubs
- Unit test with mock datastore in `server/service/*_test.go`
- Integration test stub if it touches the database
### 10. Verify
- Run `go build ./...` to check compilation
- Run `go test ./server/service/` to check mocks are satisfied

View file

@ -0,0 +1,78 @@
---
name: new-migration
description: Create a new Fleet database migration with timestamp naming, Up function, init registration, and test file.
allowed-tools: Bash(date *), Bash(make migration *), Bash(go build *), Bash(go test *), Bash(MYSQL_TEST*), Read, Write, Grep, Glob
model: sonnet
effort: medium
---
# Create a New Database Migration
Create a migration for: $ARGUMENTS
## Process
### 1. Generate Timestamp and Name
Use `make migration name=CamelCaseName` if available, or generate manually:
```bash
date +%Y%m%d%H%M%S
```
The migration name should be descriptive CamelCase (e.g., `AddRecoveryLockAutoRotateAt`, `CreateTableSoftwareInstallers`).
### 2. Create Migration File
Location: `server/datastore/mysql/migrations/tables/{TIMESTAMP}_{Name}.go`
```go
package tables

import "database/sql"

func init() {
	MigrationClient.AddMigration(Up_{TIMESTAMP}, Down_{TIMESTAMP})
}

func Up_{TIMESTAMP}(tx *sql.Tx) error {
	_, err := tx.Exec(`
		-- SQL statement here
	`)
	return err
}

func Down_{TIMESTAMP}(tx *sql.Tx) error {
	return nil
}
```
### 3. Create Test File
Location: `server/datastore/mysql/migrations/tables/{TIMESTAMP}_{Name}_test.go`
```go
package tables

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestUp_{TIMESTAMP}(t *testing.T) {
	db := applyUpToPrev(t)

	// Set up test data before migration if needed

	applyNext(t, db)

	// Verify migration applied correctly
	// e.g., check table exists, columns added, data migrated
}
```
### 4. Verify
- Run `go build ./server/datastore/mysql/migrations/...` to check compilation
- Run `MYSQL_TEST=1 go test -run TestUp_{TIMESTAMP} ./server/datastore/mysql/migrations/tables/` to test the migration
## Rules
- Every migration MUST have a test file
- Down migrations are always no-ops (`return nil`) — Fleet doesn't use rollback migrations
- Never modify existing migration files — create new ones
- Data migrations go in the `data/` subdirectory

View file

@ -0,0 +1,58 @@
---
name: project
description: Load or initialize a Fleet workstream project context. Use when asked to "load project" or "switch project".
context: fork
allowed-tools: Read, Write, Glob, Grep, Bash(ls *), Bash(pwd *)
effort: medium
---
# Load a workstream project context
## Detect the project directory
Find the Claude Code auto-memory directory for this project. It's based on the working directory path:
1. Run `pwd` to get the current directory.
2. Construct the memory path: `~/.claude/projects/` + the cwd with every `/` and `.` replaced by `-`, which yields a leading `-` (e.g., `/Users/alice/Source/github.com/fleetdm/fleet` → `~/.claude/projects/-Users-alice-Source-github-com-fleetdm-fleet/memory/`).
3. Verify the directory exists. If not, tell the user and stop.
Use this as the base for all reads and writes below.
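A sketch of the path construction (assumes the standard Claude Code `~/.claude/projects/` layout):

```shell
# Every "/" and "." in the cwd becomes "-", which produces the leading "-".
memory_dir="$HOME/.claude/projects/$(pwd | tr '/.' '-')/memory"
echo "$memory_dir"
```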
## Load the project
Look for a workstream context file named `$ARGUMENTS.md` in the memory directory. This contains background, decisions, and conventions for a specific workstream within Fleet.
If the project context file was found, give a brief summary of what you know and ask what we're working on today.
If the project context file doesn't exist:
1. Tell the user no project named "$ARGUMENTS" was found.
2. List any existing `.md` files in the memory directory so they can see what's available.
3. Ask if they'd like to initialize a new project with that name.
4. If they don't want to initialize, stop here.
5. If they do, ask them to brain-dump everything they know about the workstream — the goal, what areas of the codebase it touches, key decisions, gotchas, anything they've been repeating at the start of each session. A sentence is fine, a paragraph is better. Also offer: "I can also scan your recent session transcripts for relevant context — would you like me to look back through recent chats?"
6. If they want you to scan prior sessions, look at the JSONL transcript files in the Claude project directory (the parent of the memory directory). Read recent ones (last 5-10), skimming for messages related to the workstream. These are large files, so read selectively — check the first few hundred lines of each to gauge relevance before reading more deeply.
7. Using their description, any prior session context, and codebase exploration, find relevant files, patterns, types, and existing implementations related to the workstream.
8. Create the project file in the memory directory using this structure:
```markdown
# Project: $ARGUMENTS
## Background
<!-- What is this workstream about, in the user's words + what you learned -->
## How it works
<!-- Key mechanisms, patterns, and code flow you discovered -->
## Key files
<!-- Important file paths for this workstream, with brief descriptions -->
## Key decisions
<!-- Important architectural or design decisions -->
## Status
<!-- What's done, what remains -->
```
9. Show the user what you wrote and ask if they'd like to adjust anything before continuing.
As you work on a project, update the project file with useful discoveries — gotchas, important file paths, patterns — but not session-specific details.

View file

@ -1,3 +1,12 @@
---
name: review-pr
description: Review a Fleet pull request for correctness, Go idioms, SQL safety, test coverage, and conventions. Use when asked to "review PR" or "review pull request".
context: fork
allowed-tools: Bash(gh *), Read, Grep, Glob
model: opus
effort: high
---
Review the pull request: $ARGUMENTS
Use `gh pr view` and `gh pr diff` to get the full context.

View file

@ -0,0 +1,99 @@
---
name: spec-story
description: Break down a Fleet GitHub story issue into implementable sub-issues with technical specs. Use when asked to "spec", "break down", or "analyze" a story or issue.
allowed-tools: Bash(gh *), Read, Grep, Glob, Write, Edit, WebFetch(domain:github.com), WebFetch(domain:fleetdm.com), WebSearch
model: opus
effort: high
argument-hint: "<issue-number-or-url>"
---
# Spec a Fleet Story
Break down the GitHub story into implementable sub-issues: $ARGUMENTS
## Process
### 1. Understand the Story
- Fetch the issue with `gh issue view <number> --json title,body,labels,milestone,assignees`
- Read the full description, acceptance criteria, and any linked issues
- Identify the user-facing goal and success criteria
- If the issue references Figma designs, API docs, or external specs, fetch them
### 2. Map the Codebase Impact
Search the codebase to understand what exists and what needs to change:
- Find existing implementations of related features (Grep for key terms)
- Identify the tables, service methods, API endpoints, and frontend pages involved
- Check migration files and `server/fleet/datastore.go` for relevant schema
- Trace the request flow: API endpoint → service method → datastore → frontend
### 3. Identify Sub-Issues
Decompose into atomic, implementable units. Each sub-issue should be:
- Completable independently (or with clearly stated dependencies)
- Testable with specific acceptance criteria
- Scoped to one layer when possible (backend, frontend, or migration)
Common decomposition patterns for Fleet:
- **Database migration** — new tables or columns needed
- **Datastore methods** — new or modified query functions
- **Service layer** — business logic, authorization, validation
- **API endpoint** — new or modified HTTP endpoints
- **Frontend page/component** — UI changes
- **fleetctl/GitOps** — CLI and GitOps YAML support
- **Tests** — integration test coverage for the feature
- **Documentation** — REST API docs, user-facing docs
### 4. Write Each Sub-Issue Spec
For each sub-issue, write:
```markdown
## Sub-issue N: [Title]
**Depends on:** [sub-issue numbers, or "none"]
**Layer:** [migration | datastore | service | API | frontend | CLI | docs | tests]
**Estimated scope:** [small: <2h | medium: 2-8h | large: >8h]
### What
[1-3 sentences describing the change]
### Why
[How this contributes to the parent story's goal]
### Technical Approach
- [Specific files to create or modify]
- [Key functions, types, or patterns to follow]
- [Reference existing similar implementations]
### Acceptance Criteria
- [ ] [Testable criterion 1]
- [ ] [Testable criterion 2]
- [ ] [Tests pass: specific test commands]
### Open Questions
- [Any ambiguity that needs product/design input]
```
### 5. Produce the Dependency Graph
Show which sub-issues depend on which:
```
Migration → Datastore → Service → API → Frontend
                                      → CLI/GitOps
                                      → Docs
```
Note which sub-issues can be parallelized.
### 6. Write the Output
Create a spec document with:
1. **Summary** — one paragraph overview
2. **Sub-issues** — each with the template above
3. **Dependency graph** — visual ordering
4. **Open questions** — anything that needs clarification before implementation begins
5. **Suggested PR strategy** — single PR vs multiple, review order
## Rules
- Every sub-issue must reference specific files and patterns from the codebase
- No vague specs: "implement the backend" is not a sub-issue
- If you find ambiguity in the story, flag it as an open question rather than guessing
- Check for related existing issues with `gh issue list --search "keyword" --limit 10`
- Consider Fleet's multi-platform nature: does this affect macOS, Windows, Linux, iOS, Android?
- Consider enterprise vs core: does this need license checks?

View file

@ -0,0 +1,31 @@
---
name: test
description: Run tests related to recent changes with appropriate tools and environment variables. Use when asked to "run tests", "test my changes", or "test this".
allowed-tools: Bash(go test *), Bash(MYSQL_TEST*), Bash(MYSQL_TEST=1 *), Bash(MYSQL_TEST=1 REDIS_TEST=1 *), Bash(FLEET_INTEGRATION_TESTS_DISABLE_LOG=1 *), Bash(yarn test*), Bash(npx jest*), Bash(git diff*), Bash(git status*), Read, Grep, Glob
effort: low
---
Run tests related to my recent changes. Look at `git diff` and `git diff --cached` to determine which files were modified.
## Go tests
For each modified Go package, run the tests with appropriate env vars:
- If the package is under `server/datastore/mysql`: use `MYSQL_TEST=1`
- If the package is under `server/service`: use `MYSQL_TEST=1 REDIS_TEST=1`
- Otherwise: run without special env vars
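The mapping above as a small shell sketch (the package path is example input):

```shell
pkg="server/datastore/mysql"
case "$pkg" in
  server/datastore/mysql*) testenv="MYSQL_TEST=1" ;;
  server/service*)         testenv="MYSQL_TEST=1 REDIS_TEST=1" ;;
  *)                       testenv="" ;;
esac
echo "$testenv go test ./$pkg/..."
```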
## Frontend tests
If any files under `frontend/` were modified, run the relevant frontend tests:
- Find test files matching the changed components (e.g., `ComponentName.tests.tsx`)
- Run with: `yarn test --testPathPattern "path/to/changed/component"`
- If many files changed, run the full suite: `yarn test`
## Choosing what to run
- If only Go files changed, run Go tests only
- If only frontend files changed, run frontend tests only
- If both changed, run both
- If an argument is provided, use it as a filter: $ARGUMENTS (passed as `-run` for Go or `--testPathPattern` for frontend)
Show a summary of results: which packages/suites passed, which failed, and any failure details.

View file

@ -17,7 +17,9 @@ reviews:
suggested_labels: false
suggested_reviewers: false
auto_review:
enabled: false
enabled: true
path_filters:
- "!**/*.md" # Don't weigh in on docs changes at this time
path_instructions:
- path: "**/*.go"
instructions: "When reviewing SQL queries that are added or modified, ensure that appropriate filtering criteria are applied—especially when a query is intended to return data for a specific entity (e.g., a single host). Check for missing WHERE clauses or incorrect filtering that could lead to incorrect or non-deterministic results (e.g., returning the first row instead of the correct one). Flag any queries that may return unintended results due to lack of precise scoping. Review all SQL queries for possible SQL injection."

View file

@ -1,8 +1,11 @@
# This configures the golangci-lint custom build, which is necessary to use nilaway as a plugin per https://github.com/uber-go/nilaway?tab=readme-ov-file#golangci-lint--v1570
# This has to be >= v1.57.0 for module plugin system support.
version: v2.7.1
version: v2.11.3
plugins:
- module: "go.uber.org/nilaway"
import: "go.uber.org/nilaway/cmd/gclplugin"
version: v0.0.0-20260126174828-99d94caaf043 # fixed version for reproducible builds - latest as of 2026-01-29
- module: "github.com/fleetdm/fleet/v4/tools/ci/setboolcheck"
import: "github.com/fleetdm/fleet/v4/tools/ci/setboolcheck/cmd/gclplugin"
path: "tools/ci/setboolcheck"

View file

@ -20,7 +20,7 @@ assignees: ''
TODO
### 🛠️ To fix
<!-- Add the expected fix here. If you're not sure, leave this blank for product to specify. -->
<!-- Add the expected fix here. If you're not sure, leave this blank for product to specify. If the Product Designer is unsure, add "TODO: Up to Tech Lead" and move the bug to "Ready to spec". -->
TODO
### 🧑‍💻 Steps to reproduce

View file

@ -12,14 +12,8 @@ assignees: 'xpkoala,andreykizimenko,chrstphr84,Brajim20'
# Important reference data
1. [fleetctl preview setup](https://fleetdm.com/fleetctl-preview)
2. [permissions documentation](https://fleetdm.com/docs/using-fleet/permissions)
3. premium tests require license key (needs renewal) `fleetctl preview --license-key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJGbGVldCBEZXZpY2UgTWFuYWdlbWVudCBJbmMuIiwiZXhwIjoxNjQwOTk1MjAwLCJzdWIiOiJkZXZlbG9wbWVudCIsImRldmljZXMiOjEwMCwibm90ZSI6ImZvciBkZXZlbG9wbWVudCBvbmx5IiwidGllciI6ImJhc2ljIiwiaWF0IjoxNjIyNDI2NTg2fQ.WmZ0kG4seW3IrNvULCHUPBSfFdqj38A_eiXdV_DFunMHechjHbkwtfkf1J6JQJoDyqn8raXpgbdhafDwv3rmDw`
4. premium tests require license key (active - Expires Sunday, January 1, 2023 12:00:00 AM) `fleetctl preview --license-key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJGbGVldCBEZXZpY2UgTWFuYWdlbWVudCBJbmMuIiwiZXhwIjoxNjcyNTMxMjAwLCJzdWIiOiJGbGVldCBEZXZpY2UgTWFuYWdlbWVudCIsImRldmljZXMiOjEwMCwibm90ZSI6ImZvciBkZXZlbG9wbWVudCBvbmx5IiwidGllciI6InByZW1pdW0iLCJpYXQiOjE2NDI1MjIxODF9.EGHQjIzM73YyMbnCruswzg360DEYCsDi9uz48YcDwQHq90BabGT5PIXRiculw79emGj5sk2aKgccTd2hU5J7Jw`
# Database migration tests
1. Create a [custom issue](https://github.com/fleetdm/confidential/issues/new?template=1-custom-request.md) tagged `:help-customers` in the confidential repo to run [cloud migration tests](https://github.com/fleetdm/confidential/actions/workflows/cloud-tests.yml) targeted off of the RC branch. Tests will be run off of [these environments](https://github.com/fleetdm/confidential/tree/main/infrastructure/cloud-tests).
2. Once tests are complete, if migration duration for any environment takes more than 5 seconds, check logs to determine whether any single migration took more than 5 seconds, or if the entire process took more than 15 seconds. If either is the case and there is not already a progress indicator for the migration that updates at least every ten seconds, file an unreleased bug triaged to the team that created the migration to audit the migration and evaluate if progress updates or performance improvements are needed.
2. [Permissions documentation](https://fleetdm.com/docs/using-fleet/permissions)
3. [Fleet free vs premium documentation](https://fleetdm.com/pricing)
# Smoke Tests
Smoke tests are limited to core functionality and serve as a pre-release final review. If smoke tests are failing, a release cannot proceed.
@ -32,10 +26,8 @@ Smoke tests are limited to core functionality and serve as a pre-release final r
### Prerequisites
1. `fleetctl preview` is set up and running the desired test version using [`--tag` parameters.](https://fleetdm.com/handbook/engineering#run-fleet-locally-for-qa-purposes)
2. Unless you are explicitly testing older browser versions, browser is up to date.
3. Certificate & flagfile are in place to create new host.
4. In your browser, clear local storage using devtools.
1. Local instance is running and up to date with the target release branch
2. In your browser, clear local storage using devtools.
### Orchestration
<table>
@ -103,7 +95,10 @@ Smoke tests are limited to core functionality and serve as a pre-release final r
Run basic checks for the product group area while using a Fleet Free license.
- Features documented as Free work normally
- Packs
- Gitops
- Premium features are correctly restricted or hidden
- IdP information
- No UI, API, or workflow errors occur when using Free-only functionality
Reference: https://fleetdm.com/pricing
@ -193,7 +188,12 @@ Perform a quick visual scan of the UI and confirm:
Run basic checks for the product group area while using a Fleet Free license.
- Features documented as Free work normally
- Host enrollment
- Apple, Windows, Android MDM
- Configuration profile delivery
- APNs Certificate renewal
- Premium features are correctly restricted or hidden
- Setup experience
- No UI, API, or workflow errors occur when using Free-only functionality
Reference: https://fleetdm.com/pricing
@ -256,23 +256,18 @@ Perform a quick visual scan of the UI and confirm:
7. Verify software installs display correctly in Activity feed.
</td><td>pass/fail</td></tr>
<tr><td>Migration Test</td><td>Verify Fleet can migrate to the next version with no issues.</td><td>
Using the github action https://github.com/fleetdm/fleet/actions/workflows/db-upgrade-test.yml
1. Using the most recent stable version of Fleet and `main`, click `Run workflow`
2. Enter the Docker tag of Fleet starting version, e.g. 'v4.64.2'
3. Enter the Docker tag of Fleet version to upgrade to, e.g. 'rc-minor-fleet-v4.65.0'
4. Click `Run workflow`.
5. Action should complete successfully.
</td><td>pass/fail</td></tr>
<tr><td>Fleet Free</td><td>Verify that product group features behave correctly on Fleet Free</td><td>
Run basic checks for the product group area while using a Fleet Free license.
- Features documented as Free work normally
- Host details page
- Reports (Add, edit, live report)
- Software inventory
- Scripts (Add, delete, run)
- My device page (Mac, Windows, Linux)
- Premium features are correctly restricted or hidden
- Add software
- No UI, API, or workflow errors occur when using Free-only functionality
Reference: https://fleetdm.com/pricing
@ -339,7 +334,13 @@ Perform a quick visual scan of the UI and confirm:
Run basic checks for the product group area while using a Fleet Free license.
- Features documented as Free work normally
- Vulnerability detection
- Individual CVE page
- Premium features are correctly restricted or hidden
- Disk encryption
- OS Updates
- Lock / Wipe
- Certificate authorities
- No UI, API, or workflow errors occur when using Free-only functionality
Reference: https://fleetdm.com/pricing
@ -361,21 +362,74 @@ Perform a quick visual scan of the UI and confirm:
### All Product Groups
<table>
<tr><th>Test name</th><th>Step instructions</th><th>Expected result</th><th>pass/fail</td></tr>
<tr><td>$Name</td><td>{what a tester should do}</td><td>{what a tester should see when they do that}</td><td>pass/fail</td></tr>
<tr><td>Release blockers</td><td>Verify there are no outstanding release blocking tickets.</td><td>
1. Check [this](https://github.com/fleetdm/fleet/labels/~release%20blocker) filter to view all open `~release blocker` tickets.
2. If any are found, raise an alarm in the `#help-engineering` and `#g-mdm` (or `#g-endpoint-ops`) channels.
</td><td>pass/fail</td>
<tr><td>Load tests - minor releases only unless otherwise specified</td><td>Verify all load test metrics are within acceptable range on final build of RC.</td><td>
1. Check [this Google doc](https://docs.google.com/document/d/1V6QtFzcGDsLnn2PIvGin74DAxdAN_3likjxSssOMMQI/edit?tab=t.0#heading=h.15acjob4ji20) to review load test key metrics and checks.
2. After all expected changes have been merged to the RC branch, two load tests will need to be run - a new instance with no data, and a migrated instance.
3. For the new instance with no data, set up a load test environment using the RC branch and allow it at least 24hrs of run time.
4. For the migrated instance, set up a load test environment on the previous minor release branch. Once the environment has been set up and stabilized, follow the instructions in [Deploying code changes to fleet](https://github.com/fleetdm/fleet/blob/main/infrastructure/loadtesting/terraform/readme.md#deploying-code-changes-to-fleet) to migrate to the RC branch. Monitor the metrics post-migration to determine if any performance issues arise.
5. Record metrics in [this spreadsheet](https://docs.google.com/spreadsheets/d/1FOF0ykFVoZ7DJSTfrveip0olfyRQsY9oT1uXCCZmuKc/edit?usp=drive_link) for the two load test runs.
</td><td>pass/fail</td></tr>
<tr><th>Test name</th><th>Step instructions</th><th>Expected result</th><th>Pass/Fail</th></tr>
<tr>
<td>$Name</td>
<td>{what a tester should do}</td>
<td>{what a tester should see when they do that}</td>
<td>pass/fail</td>
</tr>
<tr>
<td>Release blockers</td>
<td>Verify there are no outstanding release blocking tickets.</td>
<td>
1. Check [this](https://github.com/fleetdm/fleet/labels/~release%20blocker) filter to view all open `~release blocker` tickets.
2. If any are found, raise an alarm in the `#help-engineering` and `#g-mdm` (or `#g-endpoint-ops`) channels.
</td>
<td>pass/fail</td>
</tr>
<tr>
<td>Load tests - minor releases only unless otherwise specified</td>
<td>Verify all load test metrics are within acceptable range on final build of RC.</td>
<td>
1. Check [this Google doc](https://docs.google.com/document/d/1V6QtFzcGDsLnn2PIvGin74DAxdAN_3likjxSssOMMQI/edit?tab=t.0#heading=h.15acjob4ji20) to review load test key metrics and checks.
2. After all expected changes have been merged to the RC branch, two load tests will need to be run - a new instance with no data, and a migrated instance.
3. For the new instance with no data, set up a load test environment using the RC branch and allow it at least 24hrs of run time.
4. For the migrated instance, set up a load test environment on the previous minor release branch. Once the environment has been set up and stabilized, follow the instructions in [Deploying code changes to fleet](https://github.com/fleetdm/fleet/blob/main/infrastructure/loadtesting/terraform/readme.md#deploying-code-changes-to-fleet) to migrate to the RC branch. Monitor the metrics post-migration to determine if any performance issues arise.
5. Record metrics in [this spreadsheet](https://docs.google.com/spreadsheets/d/1FOF0ykFVoZ7DJSTfrveip0olfyRQsY9oT1uXCCZmuKc/edit?usp=drive_link) for the two load test runs.
</td>
<td>pass/fail</td>
</tr>
<tr>
<td>Migration Test</td>
<td>Verify Fleet can migrate to the next version with no issues.</td>
<td>
Using [this GitHub action](https://github.com/fleetdm/fleet/actions/workflows/db-upgrade-test.yml)
1. Using the most recent stable version of Fleet and `main`, click `Run workflow`
2. Enter the Docker tag of Fleet starting version, e.g. `v4.64.2`
3. Enter the Docker tag of Fleet version to upgrade to, e.g. `rc-minor-fleet-v4.65.0`
4. Click `Run workflow`
5. Action should complete successfully
</td>
<td>pass/fail</td>
</tr>
<tr>
<td>Cloud migration tests</td>
<td>Verify Fleet can migrate when using real world data.</td>
<td>
Using [this GitHub action](https://github.com/fleetdm/confidential/actions/workflows/cloud-tests.yml)
1. Enter `fleetdm/fleet:rc-minor-fleet-<version>` for `The image to test`
2. Select `all` for `Where will we deploy?`
3. Action should complete successfully and the total time for each instance shouldn't be drastically different from previous releases
</td>
<td>pass/fail</td>
</tr>
</table>
### Notes

.github/ISSUE_TEMPLATE/reliability.md vendored Normal file

@ -0,0 +1,28 @@
---
name: 🔧 Reliability
about: Report a scaling, performance, or reliability issue, including post-mortem action items.
title: ''
labels: 'reliability,:help-engineering'
assignees: ''
---
## Problem
<!-- Describe the reliability, scaling, or performance issue. Include any relevant metrics, error rates, or incidents. -->
TODO
## Impact
<!-- How does this affect users or the system? Include severity, frequency, and blast radius. -->
TODO
## Proposed fix
<!-- Describe the proposed solution or mitigation. If unknown, leave blank for engineering to specify. -->
TODO
## Evidence
<!-- Link to any related incidents, post-mortem documents, dashboards, or logs. -->
N/A


@ -26,7 +26,7 @@ It is [planned and ready](https://fleetdm.com/handbook/company/development-group
- [ ] CLI (fleetctl) usage changes: TODO <!-- Insert the link to the relevant Figma cover page. Put "No changes" if there are no changes to the CLI. -->
- [ ] YAML changes: TODO <!-- Specify changes in the YAML files doc page as a PR to the reference docs release branch following the guidelines in the handbook here: https://fleetdm.com/handbook/product-design#drafting Put "No changes" if there are no changes necessary. -->
- [ ] REST API changes: TODO <!-- Specify changes in the REST API doc page as a PR to reference docs release branch following the guidelines in the handbook here: https://fleetdm.com/handbook/product-design#drafting Put "No changes" if there are no changes necessary. Move this item to the engineering list below if engineering will design the API changes. -->
- [ ] Fleet's agent (fleetd) changes: TODO <!-- Specify changes to fleetd. If the change requires a new Fleet (server) version, consider specifying to only enable this change in new Fleet versions. Put "No changes" if there are no changes necessary. -->
- [ ] Fleet's agent (fleetd) changes: TODO <!-- Specify changes to fleetd. If the change requires a new Fleet (server) version, consider specifying to only enable this change in new Fleet versions. If there are new tables, specify changes in the schema/ folder as a PR to the reference docs release branch following the guidelines in the handbook here: https://fleetdm.com/handbook/product-design#drafting Put "No changes" if there are no changes necessary. -->
- [ ] Fleet server configuration changes: TODO <!-- Specify changes in the Fleet server configuration doc page as a PR to reference docs release branch following the guidelines in the handbook here: https://fleetdm.com/handbook/product-design#drafting File a :help-customers request and assign the SVP of Customer Success. Up to Customer Success to decide if any changes to cloud environments are needed. Put "No changes" if there are no changes necessary. -->
- [ ] Exposed, public API endpoint changes: TODO <!-- Specify changes in the "Which API endpoints to expose to the public internet?" guide as a PR to reference docs release branch following the guidelines in the handbook here: https://fleetdm.com/handbook/product-design#drafting File a :help-customers request and assign the SVP of Customer Success. Up to Customer Success to decide if any changes to cloud environments are needed. Put "No changes" if there are no changes necessary. -->
- [ ] fleetdm.com changes: TODO <!-- Does this story include changes to fleetdm.com? (e.g. new API endpoints) If yes, create a blank subtask with the #g-website label, assign @eashaw, and add @eashaw and @lukeheath to the next design review meeting. fleetdm.com changes are up to @eashaw -->


@ -52,7 +52,6 @@
"integrity": "sha512-2BCOP7TN8M+gVDj7/ht3hsaO/B/n5oDbiAyyvnRlNOs+u1o+JWNYTQrmpuNp1/Wq2gcFrI01JAW+paEKDMx/CA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.27.1",
"@babel/generator": "^7.28.3",
@ -1498,7 +1497,6 @@
"resolved": "https://registry.npmjs.org/@octokit/core/-/core-7.0.2.tgz",
"integrity": "sha512-ODsoD39Lq6vR6aBgvjTnA3nZGliknKboc9Gtxr7E4WDNqY24MxANKcuDQSF0jzapvGb3KWOEDrKfve4HoWGK+g==",
"license": "MIT",
"peer": true,
"dependencies": {
"@octokit/auth-token": "^6.0.0",
"@octokit/graphql": "^9.0.1",
@ -2222,7 +2220,6 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@ -2564,7 +2561,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"baseline-browser-mapping": "^2.8.9",
"caniuse-lite": "^1.0.30001746",
@ -3114,7 +3110,6 @@
"integrity": "sha512-XyLmROnACWqSxiGYArdef1fItQd47weqB7iwtfr9JHwRrqIXZdcFMvvEcL9xHCmL0SNsOvF0c42lWyM1U5dgig==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@ -3497,9 +3492,9 @@
}
},
"node_modules/flatted": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz",
"integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==",
"version": "3.4.2",
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.4.2.tgz",
"integrity": "sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==",
"dev": true,
"license": "ISC"
},
@ -4680,9 +4675,9 @@
}
},
"node_modules/jest-util/node_modules/picomatch": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz",
"integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
"dev": true,
"license": "MIT",
"engines": {
@ -5389,9 +5384,9 @@
"license": "ISC"
},
"node_modules/picomatch": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.2.tgz",
"integrity": "sha512-V7+vQEJ06Z+c5tSye8S+nHUfI51xoXIXjHQ99cQtKUkQqqO1kO/KCJUfZXuB47h/YBlDhah2H3hdUGXn8ie0oA==",
"dev": true,
"license": "MIT",
"engines": {


@ -8,7 +8,8 @@ If some of the following don't apply, delete the relevant line.
- [ ] Changes file added for user-visible changes in `changes/`, `orbit/changes/` or `ee/fleetd-chrome/changes`.
See [Changes files](https://github.com/fleetdm/fleet/blob/main/docs/Contributing/guides/committing-changes.md#changes-files) for more information.
- [ ] Input data is properly validated, `SELECT *` is avoided, SQL injection is prevented (using placeholders for values in statements), JS inline code is prevented especially for url redirects
- [ ] Input data is properly validated, `SELECT *` is avoided, SQL injection is prevented (using placeholders for values in statements), JS inline code is prevented especially for url redirects, and untrusted data interpolated into shell scripts/commands is validated against shell metacharacters.
- [ ] Timeouts are implemented and retries are limited to avoid infinite loops
- [ ] If paths of existing endpoints are modified without backwards compatibility, checked the frontend/CLI for any necessary changes
## Testing


@ -54,56 +54,115 @@ jobs:
// Parse Fleet version from issue body
const body = issue.body || '';
const versionMatch = body.match(/\*\*Fleet version\*\*:\s*(.+)/);
const versionMatch = body.match(/\*\*Fleet versions?\*\*:\s*(.+)/i);
// Also check for Orbit/Fleetd version (case insensitive)
const orbitMatch = body.match(/\*\*(?:Orbit|Fleetd) versions?\*\*:\s*(.+)/i);
if (!versionMatch || !versionMatch[1]) {
console.log('No Fleet version found in issue body');
await tagAsUnreleased();
return;
// If no Fleet version but has Orbit/Fleetd version, check that instead
if (orbitMatch && orbitMatch[1]) {
console.log('Found Orbit/Fleetd version, will check that instead');
} else {
await tagAsUnreleased();
return;
}
}
// Extract version, removing any HTML comments
let reportedVersion = versionMatch[1].trim();
let reportedVersion = versionMatch ? versionMatch[1].trim() : '';
let orbitVersion = orbitMatch ? orbitMatch[1].trim() : '';
// Remove HTML comment if present (e.g., "4.62.0 <!-- comment -->")
reportedVersion = reportedVersion.replace(/\s*<!--.*?-->\s*/g, '').trim();
orbitVersion = orbitVersion.replace(/\s*<!--.*?-->\s*/g, '').trim();
console.log(`Found reported version: ${reportedVersion}`);
if (orbitVersion) {
console.log(`Found Orbit/Fleetd version: ${orbitVersion}`);
}
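The comment-stripping step above is easy to check in isolation; a minimal standalone sketch (the `stripComment` helper name is ours, not part of the workflow):

```javascript
// Sketch of the HTML-comment stripping applied to reported versions above.
// Issue templates leave placeholders like "4.62.0 <!-- e.g. 4.45.1 -->",
// so the workflow strips the comment before comparing version strings.
const stripComment = (raw) =>
  raw.replace(/\s*<!--.*?-->\s*/g, '').trim();

console.log(stripComment('4.62.0 <!-- e.g. 4.45.1 -->')); // "4.62.0"
```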
// Treat as unreleased if reported version is RC/main/unknown/todo
if (!reportedVersion ||
reportedVersion.trim() === '' ||
reportedVersion.toLowerCase().includes('todo') ||
reportedVersion.toLowerCase().includes('unknown') ||
reportedVersion.toLowerCase().includes('main') ||
reportedVersion.toLowerCase().includes('rc')) {
// Check both Fleet version and Orbit/Fleetd version if present
const versionsToCheck = [];
if (reportedVersion &&
reportedVersion.trim() !== '' &&
!reportedVersion.toLowerCase().includes('todo') &&
!reportedVersion.toLowerCase().includes('unknown') &&
!reportedVersion.toLowerCase().includes('main') &&
!reportedVersion.toLowerCase().includes('rc') &&
reportedVersion !== '4.x') {
versionsToCheck.push({ version: reportedVersion, type: 'fleet' });
}
if (orbitVersion &&
orbitVersion.trim() !== '' &&
!orbitVersion.toLowerCase().includes('todo') &&
!orbitVersion.toLowerCase().includes('unknown') &&
!orbitVersion.toLowerCase().includes('main') &&
!orbitVersion.toLowerCase().includes('rc')) {
versionsToCheck.push({ version: orbitVersion, type: 'orbit' });
}
// Special case: "4.x" means all 4.x versions, which is released
if (reportedVersion === '4.x') {
return;
}
// If no valid versions to check, tag as unreleased
if (versionsToCheck.length === 0) {
await tagAsUnreleased();
return;
}
if (reportedVersion === '4.x') {
return; // this is "all 4.x versions" so it's released
// Determine what we need to fetch based on versions present
const needsFleetReleases = versionsToCheck.some(v => v.type === 'fleet');
const needsOrbitTags = versionsToCheck.some(v => v.type === 'orbit');
// Fetch Fleet releases only if we have a Fleet version to check
let releasedFleetVersions = [];
if (needsFleetReleases) {
const allReleases = await github.paginate(github.rest.repos.listReleases, {
owner: "fleetdm",
repo: "fleet",
per_page: 100
});
// Extract version numbers from Fleet releases
// Fleet releases are tagged as "fleet-v4.X.X" or similar
releasedFleetVersions = allReleases
.map(release => {
// Try to extract from name
const nameMatch = release.name?.match(/(\d+\.\d+\.\d+)/);
if (nameMatch) return nameMatch[1];
return null;
})
.filter(v => v !== null);
}
// Fetch most recent 100 releases from the repo; that's realistically enough to match
// any newly created bug
const { data: allReleases } = await github.rest.repos.listReleases({
owner: "fleetdm",
repo: "fleet",
per_page: 100,
page: 1
});
// Fetch tags only if we have an orbit/fleetd version to check
let releasedOrbitVersions = [];
if (needsOrbitTags) {
const allTags = await github.paginate(github.rest.repos.listTags, {
owner: "fleetdm",
repo: "fleet",
per_page: 100
});
// Extract version numbers from releases
// Fleet releases are tagged as "fleet-v4.X.X" or similar
const releasedVersions = allReleases
.map(release => {
// Try to extract from name
const nameMatch = release.name?.match(/(\d+\.\d+\.\d+)/);
if (nameMatch) return nameMatch[1];
return null;
})
.filter(v => v !== null);
// Extract orbit/fleetd versions from tags
// Orbit tags are like "orbit-v1.X.X"
releasedOrbitVersions = allTags
.filter(tag => tag.name.match(/^orbit-v\d+\.\d+\.\d+$/))
.map(tag => {
const match = tag.name.match(/^orbit-v(\d+\.\d+\.\d+)$/);
return match ? match[1] : null;
})
.filter(v => v !== null);
}
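The tag-filtering above can be exercised standalone; a hedged sketch assuming tags come back as `{ name }` objects (as from the GitHub API), with the `extractOrbitVersions` name being ours:

```javascript
// Sketch of extracting orbit/fleetd versions from repo tags, mirroring the logic above.
// Tag objects are assumed to carry a `name` field, e.g. { name: "orbit-v1.38.0" }.
const extractOrbitVersions = (tags) =>
  tags
    .filter(tag => /^orbit-v\d+\.\d+\.\d+$/.test(tag.name))
    .map(tag => tag.name.match(/^orbit-v(\d+\.\d+\.\d+)$/)[1]);

console.log(extractOrbitVersions([
  { name: 'orbit-v1.38.0' },
  { name: 'fleet-v4.65.0' },
])); // [ '1.38.0' ]
```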
// Normalize version for comparison
// Remove common prefixes/suffixes and extract core version number
@ -111,26 +170,36 @@ jobs:
// First try to extract x.y.z pattern
let match = version.match(/v?(\d+\.\d+\.\d+)/);
if (match) return match[1];
// If no patch version, try x.y pattern and add .0
match = version.match(/v?(\d+\.\d+)(?!\.\d)/);
if (match) return match[1] + '.0';
return version;
};
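The normalization rules above are easy to verify locally; a standalone sketch reproducing the same two regexes:

```javascript
// Sketch of the version normalization above: prefer x.y.z, fall back to
// x.y (padded with ".0"), and return the input unchanged if neither matches.
const normalizeVersion = (version) => {
  // First try to extract an x.y.z pattern
  let match = version.match(/v?(\d+\.\d+\.\d+)/);
  if (match) return match[1];
  // If no patch version, try x.y and add .0
  match = version.match(/v?(\d+\.\d+)(?!\.\d)/);
  if (match) return match[1] + '.0';
  return version;
};

console.log(normalizeVersion('fleet-v4.64.2')); // "4.64.2"
console.log(normalizeVersion('4.60'));          // "4.60.0"
```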
// Split version string on "&" to handle multiple versions (e.g., "4.60 & 4.61")
const reportedVersions = reportedVersion.split('&').map(v => v.trim());
// Check if ANY of the reported versions matches any released version
// Check if ANY of the reported versions is released
let isReleased = false;
for (const version of reportedVersions) {
const normalizedVersion = normalizeVersion(version);
if (releasedVersions.some(releasedVer => releasedVer === normalizedVersion)) {
console.log(`Found released version: ${normalizedVersion}`);
isReleased = true;
break;
for (const versionInfo of versionsToCheck) {
const { version, type } = versionInfo;
// Split version string on "&" to handle multiple versions (e.g., "4.60 & 4.61")
const versions = version.split('&').map(v => v.trim());
for (const v of versions) {
const normalizedVersion = normalizeVersion(v);
// Check against the appropriate list based on type
const releasedVersions = type === 'orbit' ? releasedOrbitVersions : releasedFleetVersions;
if (releasedVersions.some(releasedVer => releasedVer === normalizedVersion)) {
console.log(`Found released ${type} version: ${normalizedVersion}`);
isReleased = true;
break;
}
}
if (isReleased) break;
}
if (isReleased) {


@ -33,7 +33,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -55,24 +55,6 @@ jobs:
restore-keys: |
${{ runner.os }}-node_modules-${{ hashFiles('**/yarn.lock') }}
- name: Go Cache
id: go-cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# In order:
# * Module download cache
# * Build cache (Linux)
# * Build cache (Mac)
# * Build cache (Windows)
path: |
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Install JS Dependencies
if: steps.js-cache.outputs.cache-hit != 'true'
run: make deps-js


@ -24,7 +24,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"


@ -38,8 +38,14 @@ jobs:
- name: Checkout
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"


@ -57,7 +57,7 @@ jobs:
rm certificate.p12
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'


@ -40,7 +40,7 @@ jobs:
uses: actions/checkout@629c2de402a417ea7690ca6ce3f33229e27606a5 # v2
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'


@ -38,8 +38,14 @@ jobs:
- name: Checkout
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"


@ -31,8 +31,14 @@ jobs:
- name: Checkout
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"


@ -38,8 +38,14 @@ jobs:
- name: Checkout
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"


@ -54,7 +54,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'


@ -34,7 +34,7 @@ jobs:
uses: actions/checkout@629c2de402a417ea7690ca6ce3f33229e27606a5 # v2
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'


@ -0,0 +1,105 @@
name: Deploy Fleet agent downloader app to Heroku.
on:
push:
branches: [ main ]
paths:
- 'ee/fleet-agent-downloader/**'
permissions:
contents: read
jobs:
build:
permissions:
contents: read
if: ${{ github.repository == 'fleetdm/fleet' }}
runs-on: ubuntu-22.04
strategy:
matrix:
node-version: [20.x]
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
# Configure our access credentials for the Heroku CLI
- uses: akhileshns/heroku-deploy@e3eb99d45a8e2ec5dca08735e089607befa4bf28 # v3.14.15
with:
heroku_api_key: ${{secrets.HEROKU_API_TOKEN_FOR_BOT_USER}}
heroku_app_name: "" # this has to be blank or it doesn't work
heroku_email: ${{secrets.HEROKU_EMAIL_FOR_BOT_USER}}
justlogin: true
- run: heroku auth:whoami
# Install the heroku-repo plugin in the Heroku CLI
- run: heroku plugins:install heroku-repo
# Set the Node.js version
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@5e21ff4d9bc1a8cf6de233a3057d20ec6b3fb69d # v3.8.1
with:
node-version: ${{ matrix.node-version }}
# Now start building!
# > …but first, get a little crazy for a sec and delete the top-level package.json file
# > i.e. the one used by the Fleet server. This is because require() in node will go
# > hunting in ancestral directories for missing dependencies, and since some of the
# > bundled transpiler tasks sniff for package availability using require(), this trips
# > up when it encounters another Node universe in the parent directory.
- run: rm -rf package.json package-lock.json node_modules/
# > Turns out there's a similar issue with how eslint plugins are looked up, so we
# > delete the top level .eslintrc file too.
- run: rm -f .eslintrc.js
# > And, as a change to the top-level fleetdm/fleet .gitignore on May 2, 2022 revealed,
# > we also need to delete the top level .gitignore file too, so that its rules don't
# > interfere with the committing and force-pushing we're doing as part of our deploy
# > script here. For more info, see: https://github.com/fleetdm/fleet/pull/5549
- run: rm -f .gitignore
# Get dependencies (including dev deps)
- run: cd ee/fleet-agent-downloader/ && npm install
# Run sanity checks
- run: cd ee/fleet-agent-downloader/ && npm test
# Compile assets
- run: cd ee/fleet-agent-downloader/ && npm run build-for-prod
# Commit newly-built assets locally so we can push them to Heroku below.
# (This commit will never be pushed to GitHub, only to Heroku.)
# > The local config flags make this work in GitHub's environment.
- run: git add ee/fleet-agent-downloader/.www
# Configure the Heroku app we'll be deploying to
- run: heroku git:remote -a fleet-agent-downloader
- run: git remote -v
# Deploy to Heroku (by pushing)
# > Since a shallow clone was grabbed, we have to "unshallow" it before forcepushing.
- run: echo "Unshallowing local repository…"
- run: git fetch --prune --unshallow
# Deploy to Heroku
- run: echo "Deploying branch '${GITHUB_REF##*/}' to Heroku…"
- name: Deploy to Heroku
run: |
set -euo pipefail
git add -A
# Create a git tree object from the currently staged repository state for this Heroku deploy.
TREE=$(git write-tree)
# Create a parentless commit from the tree object.
COMMIT=$(git -c "user.name=Fleetwood" -c "user.email=github@example.com" \
commit-tree "$TREE" \
-m 'AUTOMATED COMMIT - Deploy Fleet agent downloader app with the latest staged changes, including generated production assets.')
# Push the parentless commit to Heroku
# Note: The commit pushed to Heroku will not contain the full git history.
# This lets us deploy this app from the Fleet monorepo while working around Heroku's pack size limits.
git push heroku "$COMMIT":refs/heads/master --force
- name: 🌐 Fleet agent downloader has been deployed
run: echo '' && echo '--' && echo 'OK, done. It should be live momentarily.' && echo '(if you get impatient, check the Heroku dashboard for status)'


@ -63,7 +63,7 @@ jobs:
# Install the right version of Go for the Golang child process that we are currently using for CSR signing
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'


@ -0,0 +1,86 @@
name: Docker cleanup (branch deletion)
on:
delete:
permissions:
contents: read
jobs:
cleanup:
# Only run for branch deletions (not tag deletions) in the fleetdm/fleet repo.
if: ${{ github.event.ref_type == 'branch' && github.repository == 'fleetdm/fleet' }}
runs-on: ubuntu-latest
environment: Docker Hub
steps:
- name: Sanitize branch name
id: sanitize
env:
BRANCH: ${{ github.event.ref }}
run: |
SANITIZED="${BRANCH//\//-}"
echo "TAG=$SANITIZED" >> $GITHUB_OUTPUT
- name: Skip protected branches
id: check_protected
env:
TAG: ${{ steps.sanitize.outputs.TAG }}
run: |
if [[ "$TAG" == "main" || "$TAG" == rc-minor-* || "$TAG" == rc-patch-* ]]; then
echo "skip=true" >> $GITHUB_OUTPUT
echo "Skipping cleanup for protected branch tag: $TAG"
else
echo "skip=false" >> $GITHUB_OUTPUT
fi
- name: Delete tag from Docker Hub
if: steps.check_protected.outputs.skip == 'false'
env:
TAG: ${{ steps.sanitize.outputs.TAG }}
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
DOCKERHUB_ACCESS_TOKEN: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
run: |
# Authenticate and get JWT
TOKEN=$(curl -s -X POST "https://hub.docker.com/v2/users/login/" \
-H "Content-Type: application/json" \
-d "{\"username\": \"$DOCKERHUB_USERNAME\", \"password\": \"$DOCKERHUB_ACCESS_TOKEN\"}" \
| jq -r .token)
# Bail if the token is empty (authentication failed)
if [[ -z "$TOKEN" ]]; then
echo "Failed to authenticate with Docker Hub. Check credentials."
exit 1
fi
# Delete the tag (ignore 404 — tag may not exist)
HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X DELETE \
"https://hub.docker.com/v2/repositories/fleetdm/fleet/tags/${TAG}/" \
-H "Authorization: Bearer $TOKEN")
if [[ "$HTTP_STATUS" == "204" ]]; then
echo "Deleted Docker Hub tag: $TAG"
elif [[ "$HTTP_STATUS" == "404" ]]; then
echo "Docker Hub tag not found (already deleted or never published): $TAG"
else
echo "Unexpected response from Docker Hub: HTTP $HTTP_STATUS"
exit 1
fi
- name: Delete tag from Quay.io
if: steps.check_protected.outputs.skip == 'false'
env:
TAG: ${{ steps.sanitize.outputs.TAG }}
QUAY_REGISTRY_PASSWORD: ${{ secrets.QUAY_REGISTRY_PASSWORD }}
run: |
HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X DELETE \
"https://quay.io/api/v1/repository/fleetdm/fleet/tag/${TAG}" \
-H "Authorization: Bearer $QUAY_REGISTRY_PASSWORD")
if [[ "$HTTP_STATUS" == "204" || "$HTTP_STATUS" == "200" ]]; then
echo "Deleted Quay.io tag: $TAG"
elif [[ "$HTTP_STATUS" == "404" ]]; then
echo "Quay.io tag not found (already deleted or never published): $TAG"
else
echo "Unexpected response from Quay.io: HTTP $HTTP_STATUS"
exit 1
fi
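
The branch-name sanitization and protected-branch check above are plain bash string handling; a standalone sketch of the same two helpers (the branch names used here are illustrative):

```shell
# Replace every "/" in a branch name with "-", mirroring the
# ${BRANCH//\//-} expansion used in the workflow.
sanitize() { printf '%s\n' "${1//\//-}"; }

# Protected tags are never cleaned up: main and rc-minor-*/rc-patch-*.
is_protected() {
  case "$1" in
    main|rc-minor-*|rc-patch-*) return 0 ;;
    *) return 1 ;;
  esac
}

sanitize "feature/24755/default-fleet"   # -> feature-24755-default-fleet
is_protected "rc-minor-4.50.0" && echo "protected"
```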

View file

@ -69,7 +69,7 @@ jobs:
aws-region: ${{ env.AWS_REGION }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

View file

@ -30,7 +30,7 @@ permissions:
jobs:
fleet-gitops:
timeout-minutes: 10
timeout-minutes: 30
runs-on: ubuntu-latest
steps:
- name: Harden Runner
@ -82,8 +82,12 @@ jobs:
DOGFOOD_END_USER_SSO_METADATA: ${{ secrets.DOGFOOD_END_USER_SSO_METADATA }}
DOGFOOD_TESTING_AND_QA_ENROLL_SECRET: ${{ secrets.DOGFOOD_TESTING_AND_QA_ENROLL_SECRET }}
DOGFOOD_OKTA_CA_CERTIFICATE: ${{ secrets.DOGFOOD_OKTA_CA_CERTIFICATE }}
DOGFOOD_OKTA_ANDROID_MANAGEMENT_HINT: ${{ secrets.DOGFOOD_OKTA_ANDROID_MANAGEMENT_HINT }}
DOGFOOD_OKTA_IOS_MANAGEMENT_HINT: ${{ secrets.DOGFOOD_OKTA_IOS_MANAGEMENT_HINT }}
DOGFOOD_OKTA_VERIFY_WINDOWS_URL: ${{ secrets.DOGFOOD_OKTA_VERIFY_WINDOWS_URL }}
DOGFOOD_ENTRA_TENANT_ID: ${{ secrets.DOGFOOD_ENTRA_TENANT_ID }}
DOGFOOD_OKTA_METADATA_URL_ADMINS: ${{ secrets.DOGFOOD_OKTA_METADATA_URL_ADMINS }}
DOGFOOD_OKTA_METADATA_URL_END_USERS: ${{ secrets.DOGFOOD_OKTA_METADATA_URL_END_USERS }}
- name: Notify on Gitops failure
if: failure() && github.ref_name == 'main'

583
.github/workflows/e2e-agent.yml vendored Normal file
View file

@ -0,0 +1,583 @@
# This workflow tests enrolling of agents on the supported platforms.
#
# It starts the latest release of fleet with the "fleetctl preview" command.
# It generates the installers for the latest version of fleetd with the
# "fleetctl package" command.
#
# It tests across a matrix of configurations:
# OS: mac/Linux/Windows
# Updates: enabled/disabled
#   Channels (for each of orbit/osquery/desktop): edge/stable
# Arch: arm/x86
#
# Troubleshooting
# The top two errors seen while developing this:
# 1) Jobs are queued waiting for runners long enough for the entire workflow to fail. Scheduling for the middle of the night attempts to mitigate this. Timeouts have been tuned to try to manage it as well.
#   2) Network issues (commonly related to Cloudflare tunnels) cause some requests to fail.
#
# Upon failure, the workflow will automatically retry up to 3 times. Notifications are sent to Slack upon failure, and also after the failure has been resolved. After 4 failures, a stronger message will be logged to Slack.
name: E2E Test Agents
on:
workflow_dispatch: # Manual
inputs:
retry:
description: 'Number of retries attempted so far'
type: number
default: 0
schedule:
- cron: '0 5 * * *' # Nightly 5AM UTC
pull_request:
paths:
- '.github/workflows/e2e-agent.yml'
# Each cron schedule gets its own concurrency group. workflow_dispatch and pull_request also get their own.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}-${{ github.event.schedule || github.event_name }}
cancel-in-progress: true
defaults:
run:
# fail-fast using bash -eo pipefail. See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash
jobs:
# Generate a random UUID to be used for the Cloudflare tunnel subdomain and make it available to later jobs.
gen:
runs-on: ubuntu-latest
outputs:
subdomain: ${{ steps.gen.outputs.subdomain }}
address: ${{ steps.gen.outputs.address }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- id: gen
run: |
UUID=$(uuidgen)
echo "subdomain=fleet-test-$UUID" >> $GITHUB_OUTPUT
echo "address=https://fleet-test-$UUID.fleetuem.com" >> $GITHUB_OUTPUT
run-server:
timeout-minutes: 240
runs-on: ubuntu-latest
needs: gen
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Start tunnel
env:
CERT_PEM: ${{ secrets.CLOUDFLARE_TUNNEL_FLEETUEM_CERT_B64 }}
run: |
# Increase maximum receive buffer size to roughly 2.5 MB.
# Cloudflared uses quic-go. This buffer holds packets that have been received by the kernel,
# but not yet read by the application (quic-go in this case). Once this buffer fills up, the
# kernel will drop any new incoming packet.
# See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size.
sudo sysctl -w net.core.rmem_max=2500000
# Install cloudflared and run tunnel
wget https://github.com/cloudflare/cloudflared/releases/download/2026.3.0/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared-linux-amd64.deb
echo "$CERT_PEM" | base64 -d > cert.pem
cloudflared tunnel --origincert cert.pem create ${{ needs.gen.outputs.subdomain }}
cloudflared tunnel --origincert cert.pem route dns ${{ needs.gen.outputs.subdomain }} ${{ needs.gen.outputs.subdomain }}
cloudflared tunnel --origincert cert.pem --url http://localhost:1337 --logfile cloudflared.log run ${{ needs.gen.outputs.subdomain }} &
until [[ $(cloudflared tunnel --origincert cert.pem info -o json ${{ needs.gen.outputs.subdomain }} | jq '.conns[0].conns[0].is_pending_reconnect') = false ]]; do
echo "Awaiting tunnel ready..."
sleep 1
done
- name: Run Fleet server
run: |
npm install -g fleetctl
fleetctl preview --no-hosts --disable-open-browser
fleetctl config set --address ${{ needs.gen.outputs.address }}
fleetctl get enroll-secret
docker compose -f ~/.fleet/preview/docker-compose.yml logs --follow fleet01 fleet02 &
# Ensure Fleet server is responding before waiting for enrollments
echo "Checking Fleet server health..."
HEALTH_CHECK_COUNT=0
until HTTP_CODE=$(curl -sS -o /dev/null -w "%{http_code}" http://localhost:1337/healthz) && [[ "$HTTP_CODE" == "200" ]]; do
HEALTH_CHECK_COUNT=$((HEALTH_CHECK_COUNT + 1))
if [ $HEALTH_CHECK_COUNT -ge 30 ]; then
echo "ERROR: Fleet server not responding after 150 seconds"
docker ps -a --filter "name=fleet"
exit 1
fi
echo "Health check ${HEALTH_CHECK_COUNT}/30 (HTTP status: ${HTTP_CODE:-connection failed})"
sleep 5
done
echo "Fleet server is responding"
# Wait for all hosts to enroll, then keep the server alive until the summary job completes.
EXPECTED=96 # This needs to be updated when the matrix strategies are updated.
START=$(date +%s)
while true; do
ELAPSED=$(( $(date +%s) - START ))
# Check and display enrollment status
fleetctl get hosts || true
HOST_COUNT=$(fleetctl get hosts --json | (grep -v "No hosts found" || true) | wc -l | tr -d ' ')
echo "Hosts enrolled: ${HOST_COUNT} / $EXPECTED (${ELAPSED}s)"
# Check summary job status
JOBS_JSON=$(gh api "/repos/${{ github.repository }}/actions/runs/${{ github.run_id }}/jobs?per_page=100")
SUMMARY_STATUS=$(echo "$JOBS_JSON" | jq -r '[.jobs[] | select(.name == "summary")] | if length > 0 then .[0].status else "not_started" end')
echo "Summary job status: $SUMMARY_STATUS"
if [ "$SUMMARY_STATUS" = "completed" ]; then
echo "Summary job completed, exiting."
break
fi
sleep 10
done
env:
GH_TOKEN: ${{ github.token }}
- name: Show enrolled hosts
if: always()
run: |
fleetctl get hosts
fleetctl get hosts --json | jq
- name: Cleanup tunnel
if: always()
run: cloudflared tunnel --origincert cert.pem delete --force ${{ needs.gen.outputs.subdomain }} || true
- name: Print cloudflared logs
if: always()
run: cat cloudflared.log || true
- name: Cancel workflow if run-server fails
if: failure()
run: gh run cancel ${{ github.run_id }} --repo fleetdm/fleet
env:
GH_TOKEN: ${{ secrets.FLEET_RELEASE_GITHUB_PAT }}
login:
timeout-minutes: 15
runs-on: ubuntu-latest
needs: gen
outputs:
token: ${{ steps.login.outputs.token }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
# Login only here and share the token because otherwise we could hit rate limits.
- name: Set Cloudflare DNS
run: |
# Use Cloudflare's DNS resolver (1.1.1.1) since the tunnel DNS record is managed
# by Cloudflare. Apply to all non-loopback interfaces in case traffic routes
# through one other than eth0.
for iface in $(ip -o link show | awk -F': ' '{print $2}' | grep -v lo); do
sudo resolvectl dns "$iface" 1.1.1.1 2>/dev/null || true
done
- id: login
name: Attempt login
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }}
# Wait for DNS to propagate by querying Cloudflare's DoH endpoint over HTTPS.
# This avoids relying on UDP/53 to 1.1.1.1, which may be blocked on runners.
HOSTNAME=$(echo "${{ needs.gen.outputs.address }}" | sed 's|https://||')
echo "Waiting for DNS propagation..."
DNS_START=$(date +%s)
until curl -sf "https://1.1.1.1/dns-query?name=${HOSTNAME}&type=A" \
-H 'accept: application/dns-json' | jq -e '.Status == 0' > /dev/null; do
ELAPSED=$(( $(date +%s) - DNS_START ))
echo "DNS not yet propagated... (${ELAPSED}s)"
sleep 2
done
echo "DNS propagated."
# Wait for Fleet server to be reachable
echo "Waiting for Fleet server to pass health check..."
HEALTH_CHECK_START=$(date +%s)
until curl -s -o /dev/null -w "%{http_code}" ${{ needs.gen.outputs.address }}/healthz | grep -q "200"; do
ELAPSED=$(( $(date +%s) - HEALTH_CHECK_START ))
echo "Health check failed... (${ELAPSED}s)"
sleep 1
done
echo "Fleet server is responding, attempting login..."
LOGIN_START=$(date +%s)
until fleetctl login --email admin@example.com --password preview1337#; do
ELAPSED=$(( $(date +%s) - LOGIN_START ))
echo "Login attempt failed... (${ELAPSED}s)"
sleep 1
done
TOKEN=$(fleetctl config get token | awk '{print $3}')
echo "token=$TOKEN" >> $GITHUB_OUTPUT
fleetd-macos:
timeout-minutes: 10
strategy:
matrix:
runner: [ 'macos-15', 'macos-15-intel' ]
orbit-channel: [ 'stable', 'edge' ]
osqueryd-channel: [ 'stable', 'edge' ]
desktop-channel: [ 'stable', 'edge' ]
disable-updates: [ true, false ]
runs-on: ${{ matrix.runner }}
needs: [gen, login]
steps:
- name: Install fleetctl
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }}
- name: Set Cloudflare DNS
run: |
# Use Cloudflare's DNS resolver (1.1.1.1) since the tunnel DNS record is managed
# by Cloudflare — their resolver sees the new record immediately.
for svc in $(networksetup -listallnetworkservices | tail -n +2); do
sudo networksetup -setdnsservers "$svc" 1.1.1.1 2>/dev/null || true
done
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder || true
- name: Install fleetd
run: |
ARCH=$(uname -m)
sudo hostname macos-${ARCH}-${{ matrix.orbit-channel }}-${{ matrix.osqueryd-channel }}-${{ matrix.desktop-channel }}-${{ matrix.disable-updates }}
SECRET_JSON=$(fleetctl get enroll_secret --json --debug)
echo $SECRET_JSON
SECRET=$(echo $SECRET_JSON | jq -r '.spec.secrets[0].secret')
echo "Secret: $SECRET"
echo "Hostname: $(hostname -s)"
# Instance identifier is needed because macOS runners share UUIDs
fleetctl package --type pkg --fleet-url=${{ needs.gen.outputs.address }} --enroll-secret=$SECRET --orbit-channel=${{ matrix.orbit-channel }} --osqueryd-channel=${{ matrix.osqueryd-channel }} --desktop-channel=${{ matrix.desktop-channel }} --fleet-desktop --debug --host-identifier=instance --disable-updates=${{ matrix.disable-updates }}
sudo installer -pkg fleet-osquery.pkg -target /
ENROLLMENT_START=$(date +%s)
until fleetctl get hosts | grep -iF $(hostname -s);
do
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - ENROLLMENT_START))
echo "Awaiting enrollment... (${ELAPSED}s)"
sleep 1
done
- name: Check processes
run: |
sleep 30
sudo tail -60 /var/log/orbit/orbit.stderr.log
echo "Checking if osqueryd is running..."
pgrep -x osqueryd || (echo "ERROR: osqueryd is not running" && exit 1)
echo "Checking if orbit is running..."
pgrep -x orbit || (echo "ERROR: orbit is not running" && exit 1)
echo "Checking if fleet-desktop is running..."
pgrep -x fleet-desktop || (echo "ERROR: fleet-desktop is not running" && exit 1)
echo "All processes are running."
- name: Print orbit logs
if: always()
run: |
sudo cat /var/log/orbit/*
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
sparse-checkout: |
it-and-security/lib/macos/scripts/uninstall-fleetd-macos.sh
- name: Uninstall Orbit
run: |
sudo ./it-and-security/lib/macos/scripts/uninstall-fleetd-macos.sh
fleetd-ubuntu:
timeout-minutes: 10
strategy:
matrix:
runner: [ 'ubuntu-24.04', 'ubuntu-24.04-arm' ]
orbit-channel: [ 'stable', 'edge' ]
osqueryd-channel: [ 'stable', 'edge' ]
desktop-channel: [ 'stable', 'edge' ]
disable-updates: [ true, false ]
runs-on: ${{ matrix.runner }}
needs: [gen, login]
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Install fleetctl
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }}
- name: Set Cloudflare DNS
run: |
# Use Cloudflare's DNS resolver (1.1.1.1) since the tunnel DNS record is managed
# by Cloudflare. Apply to all non-loopback interfaces in case traffic routes
# through one other than eth0.
for iface in $(ip -o link show | awk -F': ' '{print $2}' | grep -v lo); do
sudo resolvectl dns "$iface" 1.1.1.1 2>/dev/null || true
done
- name: Install Orbit
run: |
ARCH=$(uname -m)
if [ "$ARCH" = "x86_64" ]; then FLEET_ARCH="amd64"; else FLEET_ARCH="arm64"; fi
sudo hostnamectl set-hostname ubuntu-${ARCH}-${{ matrix.orbit-channel }}-${{ matrix.osqueryd-channel }}-${{ matrix.desktop-channel }}-${{ matrix.disable-updates }}
SECRET_JSON=$(fleetctl get enroll_secret --json --debug)
echo $SECRET_JSON
SECRET=$(echo $SECRET_JSON | jq -r '.spec.secrets[0].secret')
echo "Secret: $SECRET"
echo "Hostname: $(hostname -s)"
fleetctl package --type deb --fleet-url=${{ needs.gen.outputs.address }} --enroll-secret=$SECRET --orbit-channel=${{ matrix.orbit-channel }} --osqueryd-channel=${{ matrix.osqueryd-channel }} --desktop-channel=${{ matrix.desktop-channel }} --fleet-desktop --debug --arch=$FLEET_ARCH --disable-updates=${{ matrix.disable-updates }}
sudo dpkg -i fleet-osquery*
ENROLLMENT_START=$(date +%s)
until fleetctl get hosts | grep -iF $(hostname -s); do
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - ENROLLMENT_START))
echo "Waiting for enrollment... (${ELAPSED}s)"
sudo systemctl status orbit.service || true
sleep 1
done
- name: Check processes
run: |
sudo systemctl status orbit.service
sleep 30
sudo systemctl status orbit.service
echo "Checking if osqueryd is running..."
pgrep -x osqueryd || (echo "ERROR: osqueryd is not running" && exit 1)
echo "Checking if orbit is running..."
pgrep -x orbit || (echo "ERROR: orbit is not running" && exit 1)
# Don't check for Fleet Desktop as it doesn't run in the windowless CI environment.
echo "All processes are running."
- name: Print orbit logs
if: always()
run: |
sudo journalctl -u orbit.service --no-pager
- name: Uninstall Orbit
run: |
sudo apt remove fleet-osquery -y
fleetd-windows:
timeout-minutes: 10
strategy:
matrix:
runner: [ 'windows-2025', 'windows-11-arm' ]
orbit-channel: [ 'stable', 'edge' ]
osqueryd-channel: [ 'stable', 'edge' ]
desktop-channel: [ 'stable', 'edge' ]
disable-updates: [ true, false ]
needs: [gen, login]
runs-on: ${{ matrix.runner }}
steps:
# We need to use some shenanigans to rename the Windows computer without restarting. Note: Windows computers should not get names longer than 15 characters (confirmed this breaks networking).
- name: Rename computer
shell: powershell
run: |
$orbit = "${{ matrix.orbit-channel }}"
$osqueryd = "${{ matrix.osqueryd-channel }}"
$desktop = "${{ matrix.desktop-channel }}"
$arch = if ($env:PROCESSOR_ARCHITECTURE -eq 'ARM64') { 'a' } else { 'x' }
$disableUpdates = if ("${{ matrix.disable-updates }}" -eq "true") { "t" } else { "f" }
$ComputerName = "win-$arch-$($orbit[0])-$($osqueryd[0])-$($desktop[0])-$disableUpdates"
echo "Setting computer name to $ComputerName"
Remove-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -name "Hostname"
Remove-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -name "NV Hostname"
Set-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\Computername\Computername" -name "Computername" -value $ComputerName
Set-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\Computername\ActiveComputername" -name "Computername" -value $ComputerName
Set-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -name "Hostname" -value $ComputerName
Set-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -name "NV Hostname" -value $ComputerName
Set-ItemProperty -path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" -name "AltDefaultDomainName" -value $ComputerName
Set-ItemProperty -path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" -name "DefaultDomainName" -value $ComputerName
- name: Set Cloudflare DNS
shell: powershell
run: |
# Use Cloudflare's DNS resolver (1.1.1.1) since the tunnel DNS record is managed
# by Cloudflare — their resolver sees the new record immediately.
# -ErrorAction SilentlyContinue skips adapters (e.g. Hyper-V virtual/internal)
# that have no associated DNS client address object.
Get-NetAdapter | ForEach-Object { Set-DnsClientServerAddress -InterfaceIndex $_.InterfaceIndex -ServerAddresses "1.1.1.1" -ErrorAction SilentlyContinue }
Clear-DnsClientCache
- name: Install fleetctl
shell: bash
        # On Windows we need to provide a root CA or skip TLS verification. Since this is a test environment we can skip TLS verification.
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }} --tls-skip-verify
- name: Install WiX toolset (arm runner only)
if: matrix.runner == 'windows-11-arm'
shell: powershell
run: |
Invoke-WebRequest -Uri "https://github.com/wixtoolset/wix3/releases/download/wix3141rtm/wix314.exe" -OutFile wix314.exe
Start-Process -Wait -FilePath .\wix314.exe -ArgumentList "/quiet"
"WIX=C:\Program Files (x86)\WiX Toolset v3.14\" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
- name: Build MSI
shell: bash
run: |
SECRET_JSON=$(fleetctl get enroll_secret --json --debug)
echo "$SECRET_JSON"
# Strip any prefix before the JSON (e.g. "Installing fleetctl... Install completed. ") in case
# fleetctl auto-updates and writes an install message to stdout on the same line as the JSON.
SECRET=$(echo "$SECRET_JSON" | sed 's/^[^{]*//' | jq -r '.spec.secrets[0].secret')
echo "Secret: $SECRET"
ARCH=$(echo "$PROCESSOR_ARCHITECTURE" | tr '[:upper:]' '[:lower:]')
# WIX env var points to the WiX Toolset install dir (pre-installed on windows-11 runner, installed above on windows-11-arm)
fleetctl package --type msi --fleet-url=${{ needs.gen.outputs.address }} --enroll-secret=$SECRET --orbit-channel=${{ matrix.orbit-channel }} --osqueryd-channel=${{ matrix.osqueryd-channel }} --desktop-channel=${{ matrix.desktop-channel }} --fleet-desktop --debug --local-wix-dir="${WIX}bin" --arch=$ARCH --disable-updates=${{ matrix.disable-updates }}
- name: Install Orbit
shell: cmd
run: |
msiexec /i fleet-osquery.msi /quiet /passive /lv log.txt
- name: Wait for enrollment
shell: powershell
run: |
$orbit = "${{ matrix.orbit-channel }}"
$osqueryd = "${{ matrix.osqueryd-channel }}"
$desktop = "${{ matrix.desktop-channel }}"
$arch = if ($env:PROCESSOR_ARCHITECTURE -eq 'ARM64') { 'a' } else { 'x' }
$disableUpdates = if ("${{ matrix.disable-updates }}" -eq "true") { "t" } else { "f" }
$ComputerName = "win-$arch-$($orbit[0])-$($osqueryd[0])-$($desktop[0])-$disableUpdates"
$StartTime = Get-Date
do {
$hosts = fleetctl get hosts
if ($hosts -match $ComputerName) {
Write-Host "Success! $ComputerName enrolled."
break
}
$Elapsed = [math]::Round(((Get-Date) - $StartTime).TotalSeconds)
Write-Host "Waiting for enrollment... (${Elapsed}s)"
Start-Sleep -Seconds 1
} while ($true)
- name: Check processes
shell: powershell
run: |
Start-Sleep -Seconds 30
Write-Host "Checking if osqueryd is running..."
if (-not (Get-Process -Name "osqueryd" -ErrorAction SilentlyContinue)) {
Write-Host "ERROR: osqueryd is not running"
exit 1
}
Write-Host "Checking if orbit is running..."
if (-not (Get-Process -Name "orbit" -ErrorAction SilentlyContinue)) {
Write-Host "ERROR: orbit is not running"
exit 1
}
Write-Host "Checking if fleet-desktop is running..."
if (-not (Get-Process -Name "fleet-desktop" -ErrorAction SilentlyContinue)) {
Write-Host "ERROR: fleet-desktop is not running"
exit 1
}
Write-Host "All processes are running."
- name: Print orbit install log
if: always()
shell: powershell
run: Get-Content log.txt -ErrorAction SilentlyContinue
- name: Print Orbit logs
if: always()
shell: powershell
run: Get-Content "C:\Windows\system32\config\systemprofile\AppData\Local\FleetDM\Orbit\Logs\orbit-osquery.log" -ErrorAction SilentlyContinue
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
sparse-checkout: |
it-and-security/lib/windows/scripts/uninstall-fleetd-windows.ps1
- name: Uninstall Orbit
shell: powershell
run: |
.\it-and-security\lib\windows\scripts\uninstall-fleetd-windows.ps1
summary:
needs: [fleetd-macos, fleetd-ubuntu, fleetd-windows]
runs-on: ubuntu-latest
if: always()
steps:
- name: Compute next retry
id: next-retry
run: echo "value=$(( ${{ inputs.retry || 0 }} + 1 ))" >> $GITHUB_OUTPUT
- name: Slack Notification (failure with retries remaining)
if: (github.event_name == 'schedule' || (github.event_name == 'workflow_dispatch' && (inputs.retry || 0) > 0)) && (contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')) && (inputs.retry || 0) < 3
uses: slackapi/slack-github-action@af78098f536edbc4de71162a307590698245be95 # v3.0.1
with:
webhook: ${{ secrets.SLACK_G_HELP_ENGINEERING_WEBHOOK_URL }}
webhook-type: incoming-webhook
payload: |
blocks:
- type: "section"
text:
type: "mrkdwn"
text: "*Agent E2E test FAILED* (attempt ${{ steps.next-retry.outputs.value }}/4, retrying...)\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View workflow run>\nThis may not need investigation if it self-resolves on the retry. Look for the next notification of success/failure."
- name: Slack Notification (all retries exhausted)
if: (contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')) && (inputs.retry || 0) >= 3
uses: slackapi/slack-github-action@af78098f536edbc4de71162a307590698245be95 # v3.0.1
with:
webhook: ${{ secrets.SLACK_G_HELP_ENGINEERING_WEBHOOK_URL }}
webhook-type: incoming-webhook
payload: |
blocks:
- type: "header"
text:
type: "plain_text"
text: "🚨 ALL RETRIES EXHAUSTED — MANUAL INVESTIGATION REQUIRED 🚨"
- type: "section"
text:
type: "mrkdwn"
text: "*Agent E2E test FAILED all 4 attempts* :rotating_light:\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View workflow run>"
- name: Slack Notification (retry success)
      if: ${{ !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && (inputs.retry || 0) > 0 }}
uses: slackapi/slack-github-action@af78098f536edbc4de71162a307590698245be95 # v3.0.1
with:
webhook: ${{ secrets.SLACK_G_HELP_ENGINEERING_WEBHOOK_URL }}
webhook-type: incoming-webhook
payload: |
blocks:
- type: "section"
text:
type: "mrkdwn"
text: "*Agent E2E test PASSED after ${{ inputs.retry }} retr${{ inputs.retry == 1 && 'y' || 'ies' }}* :white_check_mark:\n${{ github.event.pull_request.html_url || github.event.head_commit.url }}\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View workflow run>\nThe above failure appears to have been transient. No investigation needed unless you see a pattern of repeated failures."
- name: Retry workflow on failure
# Only retry scheduled runs or manual runs that are retries for scheduled runs (inputs.retry > 0)
if: ${{ (github.event_name == 'schedule' || (github.event_name == 'workflow_dispatch' && (inputs.retry || 0) > 0)) && (contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')) && (inputs.retry || 0) < 3 }}
run: |
gh workflow run e2e-agent.yml --repo ${{ github.repository }} --ref ${{ github.head_ref || github.ref_name }} -f retry=${{ steps.next-retry.outputs.value }}
env:
GH_TOKEN: ${{ secrets.FLEET_RELEASE_GITHUB_PAT }}
- name: Cancel workflow if any job failed
if: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
run: gh run cancel ${{ github.run_id }} --repo fleetdm/fleet
env:
GH_TOKEN: ${{ secrets.FLEET_RELEASE_GITHUB_PAT }}
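
The run-server job above decides when to shut down by filtering the GitHub jobs API response for the `summary` job's status. That jq filter can be exercised against a mocked payload (the JSON below is invented to match the API's shape, not a real response):

```shell
# Mocked response shaped like GET /repos/:owner/:repo/actions/runs/:id/jobs
JOBS_JSON='{"jobs":[{"name":"gen","status":"completed"},{"name":"summary","status":"in_progress"}]}'

# Same filter as the workflow: collect jobs named "summary", then report the
# first one's status, or "not_started" if none has been scheduled yet.
FILTER='[.jobs[] | select(.name == "summary")] | if length > 0 then .[0].status else "not_started" end'

echo "$JOBS_JSON" | jq -r "$FILTER"    # -> in_progress
echo '{"jobs":[]}' | jq -r "$FILTER"   # -> not_started
```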

View file

@ -77,7 +77,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -191,7 +191,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -231,7 +231,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -274,7 +274,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

View file

@ -1,7 +1,8 @@
name: Test latest changes in fleetctl preview
# Tests the `fleetctl preview` command with latest changes in fleetctl and
# docs/01-Using-Fleet/starter-library/starter-library.yml
# Tests the `fleetctl preview` command with the Fleet server and fleetctl
# built from the same commit, ensuring the starter library and GitOps
# pipeline work end-to-end.
on:
push:
@ -16,7 +17,6 @@ on:
- 'server/context/**.go'
- 'orbit/**.go'
- 'ee/fleetctl/**.go'
- 'docs/01-Using-Fleet/starter-library/starter-library.yml'
- '.github/workflows/fleetctl-preview-latest.yml'
- 'tools/osquery/in-a-box'
pull_request:
@ -27,7 +27,6 @@ on:
- 'server/context/**.go'
- 'orbit/**.go'
- 'ee/fleetctl/**.go'
- 'docs/01-Using-Fleet/starter-library/starter-library.yml'
- '.github/workflows/fleetctl-preview-latest.yml'
- 'tools/osquery/in-a-box'
workflow_dispatch: # Manual
@ -67,19 +66,47 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
- name: Set up Node.js
uses: actions/setup-node@5e21ff4d9bc1a8cf6de233a3057d20ec6b3fb69d # v3.8.1
with:
node-version-file: package.json
check-latest: true
- name: Install JS dependencies
run: make deps
- name: Generate assets
run: make generate
- name: Build Fleetctl
run: make fleetctl
- name: Build Fleet server Docker image
run: |
make fleet-static
cp ./build/fleet fleet
docker build -t fleetdm/fleet:dev -f tools/fleet-docker/Dockerfile .
rm fleet
- name: Prepare preview config
run: |
# Copy the in-a-box config and set pull_policy so Docker uses the
# locally built image instead of trying to pull from Docker Hub.
cp -a tools/osquery/in-a-box /tmp/preview-config
# Add pull_policy: never to fleet01 and fleet02 services
sed -i '/^ fleet01:/,/^ [^ ]/{s/^\( image: fleetdm\/fleet.*\)/\1\n pull_policy: never/}' /tmp/preview-config/docker-compose.yml
sed -i '/^ fleet02:/,/^ [^ ]/{s/^\( image: fleetdm\/fleet.*\)/\1\n pull_policy: never/}' /tmp/preview-config/docker-compose.yml
- name: Run fleetctl preview
run: |
./build/fleetctl preview \
--tag dev \
--disable-open-browser \
--starter-library-file-path $(pwd)/docs/01-Using-Fleet/starter-library/starter-library.yml \
--preview-config-path ./tools/osquery/in-a-box
--preview-config-path /tmp/preview-config
sleep 10
./build/fleetctl get hosts | tee hosts.txt
[ $( cat hosts.txt | grep online | wc -l) -eq 8 ]

View file

@ -29,6 +29,8 @@ jobs:
egress-policy: audit
- name: Test fleetctl preview
env:
GITHUB_TOKEN: ${{ secrets.FLEET_RELEASE_GITHUB_PAT }}
run: |
npm install -g fleetctl
fleetctl preview --disable-open-browser

View file

@ -36,7 +36,7 @@ jobs:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

View file

@ -44,7 +44,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -113,7 +113,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -159,7 +159,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@@ -206,7 +206,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@@ -240,7 +240,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

View file

@@ -59,7 +59,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@@ -73,7 +73,7 @@ jobs:
run: |
# Don't forget to update
# docs/Contributing/Testing-and-local-development.md when this version changes
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@a4b55ebc3471c9fbb763fd56eefede8050f99887 # v2.7.1
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@6008b81b81c690c046ffc3fd5bce896da715d5fd # v2.11.3
SKIP_INCREMENTAL=1 make lint-go
- name: Run cloner-check tool
@@ -122,7 +122,7 @@ jobs:
fetch-depth: 0 # Fetch full history for accurate diff
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@@ -136,7 +136,7 @@ jobs:
run: |
# Don't forget to update
# docs/Contributing/Testing-and-local-development.md when this version changes
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@a4b55ebc3471c9fbb763fd56eefede8050f99887 # v2.7.1
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@6008b81b81c690c046ffc3fd5bce896da715d5fd # v2.11.3
# custom build of golangci-lint that incorporates nilaway - see .custom-gcl.yml
golangci-lint custom
./custom-gcl run -c .golangci-incremental.yml --new-from-rev=origin/${{ github.base_ref }} --timeout 15m ./...

View file

@@ -39,13 +39,13 @@ jobs:
fetch-depth: 0 # Needed for goreleaser
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
@@ -144,7 +144,7 @@ jobs:
echo "The following TAGs are to be pushed: ${{ steps.docker.outputs.TAG }}"
- name: Login to quay.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: quay.io
username: fleetdm+fleetreleaser

View file

@@ -52,7 +52,7 @@ jobs:
rm certificate.p12
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
@@ -100,7 +100,7 @@ jobs:
run: git tag $(echo ${{ github.ref_name }} | sed -e 's/orbit-//g') && git tag -d ${{ github.ref_name }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
@@ -145,7 +145,7 @@ jobs:
run: git tag $(echo ${{ github.ref_name }} | sed -e 's/orbit-//g') && git tag -d ${{ github.ref_name }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
@@ -187,7 +187,7 @@ jobs:
run: git tag $(echo ${{ github.ref_name }} | sed -e 's/orbit-//g') && git tag -d ${{ github.ref_name }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
@@ -241,7 +241,7 @@ jobs:
run: git tag $(echo ${{ github.ref_name }} | sed -e 's/orbit-//g') && git tag -d ${{ github.ref_name }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"

View file

@@ -49,13 +49,13 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
@@ -69,6 +69,14 @@ jobs:
- name: Install Dependencies
run: make deps
- name: Sanitize branch name for Docker tag
id: sanitize_branch
env:
BRANCH: ${{ github.head_ref || github.ref_name }}
run: |
SANITIZED="${BRANCH//\//-}"
echo "DOCKER_IMAGE_TAG=$SANITIZED" >> $GITHUB_OUTPUT
- name: Compute version from branch
id: compute_version
env:
@@ -90,14 +98,7 @@ jobs:
env:
GORELEASER_KEY: ${{ secrets.GORELEASER_KEY }}
FLEET_VERSION: ${{ steps.compute_version.outputs.FLEET_VERSION }}
- name: Tag image with branch name
run: docker tag fleetdm/fleet:$(git rev-parse --short HEAD) fleetdm/fleet:$(git rev-parse --abbrev-ref HEAD)
- name: Generate tag
id: generate_tag
run: |
echo "FLEET_IMAGE_TAG=$(git rev-parse --abbrev-ref HEAD)" >> $GITHUB_OUTPUT
DOCKER_IMAGE_TAG: ${{ steps.sanitize_branch.outputs.DOCKER_IMAGE_TAG }}
- name: List VEX files
id: generate_vex_files
@@ -125,7 +126,7 @@ jobs:
--pkg-types=os,library \
--severity=HIGH,CRITICAL \
--vex="${{ steps.generate_vex_files.outputs.VEX_FILES }}" \
fleetdm/fleet:${{ steps.generate_tag.outputs.FLEET_IMAGE_TAG }}
fleetdm/fleet:${{ steps.sanitize_branch.outputs.DOCKER_IMAGE_TAG }}
- name: Check high/critical vulnerabilities before publishing (docker scout)
# Only run this when tagging RCs.
@@ -133,7 +134,7 @@ jobs:
uses: docker/scout-action@381b657c498a4d287752e7f2cfb2b41823f566d9 # v1.17.1
with:
command: cves
image: fleetdm/fleet:${{ steps.generate_tag.outputs.FLEET_IMAGE_TAG }}
image: fleetdm/fleet:${{ steps.sanitize_branch.outputs.DOCKER_IMAGE_TAG }}
only-severities: critical,high
only-fixed: true
only-vex-affected: true
@@ -145,29 +146,24 @@ jobs:
- name: Publish Docker images
run: docker push fleetdm/fleet --all-tags
- name: Get tags
run: |
echo "TAG=$(git rev-parse --abbrev-ref HEAD) $(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
id: docker
- name: List tags for push
run: |
echo "The following TAGs are to be pushed: ${{ steps.docker.outputs.TAG }}"
echo "The following tag will be pushed: ${{ steps.sanitize_branch.outputs.DOCKER_IMAGE_TAG }}"
- name: Login to quay.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: quay.io
username: fleetdm+fleetreleaser
password: ${{ secrets.QUAY_REGISTRY_PASSWORD }}
- name: Tag and push to quay.io
env:
TAG: ${{ steps.sanitize_branch.outputs.DOCKER_IMAGE_TAG }}
run: |
for TAG in ${{ steps.docker.outputs.TAG }}; do
docker tag fleetdm/fleet:${TAG} quay.io/fleetdm/fleet:${TAG}
for i in {1..5}; do
docker push quay.io/fleetdm/fleet:${TAG} && break || sleep 10
done
docker tag fleetdm/fleet:${TAG} quay.io/fleetdm/fleet:${TAG}
for i in {1..5}; do
docker push quay.io/fleetdm/fleet:${TAG} && break || sleep 10
done
- name: Slack notification
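The quay.io push above wraps `docker push` in a bounded `for i in {1..5}` retry with a fixed pause. The same pattern as a reusable sketch (the `retry` helper and `flaky` demo command are hypothetical, not part of the workflow):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Bounded retry: run a command up to $1 times, sleeping $2 seconds
# between failed attempts; give up with a non-zero status afterwards.
retry() {
  local attempts=$1 pause=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    if (( i < attempts )); then
      sleep "$pause"
    fi
  done
  return 1
}

# Demo: a command that fails twice, then succeeds on the third call.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }

retry 5 0 flaky
echo "succeeded after $tries attempts"
```

In the workflow the retried command is `docker push quay.io/fleetdm/fleet:${TAG}`; any transient registry error gets up to five attempts before the step fails.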

View file

@@ -42,7 +42,7 @@ jobs:
path: fleet
- name: Setup Go
uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
cache: false
go-version-file: 'fleet/go.mod'

View file

@@ -1,483 +0,0 @@
# This workflow tests enrolling of agents on the supported platforms,
# using the latest version of fleet, fleetctl and orbit.
#
# It starts the latest release of fleet with the "fleetctl preview" command.
# It generates the installers for the latest version of Orbit with the
# "fleetctl package" command.
name: Test Fleetctl, Orbit & Preview
on:
workflow_dispatch: # Manual
schedule:
- cron: '0 2 * * *' # Nightly 2AM UTC
# This allows a subsequently queued workflow run to interrupt previous runs
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id}}
cancel-in-progress: true
defaults:
run:
# fail-fast using bash -eo pipefail. See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash
permissions:
contents: read
jobs:
gen:
runs-on: ubuntu-latest
outputs:
subdomain: ${{ steps.gen.outputs.subdomain }}
address: ${{ steps.gen.outputs.address }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- id: gen
run: |
UUID=$(uuidgen)
echo "subdomain=fleet-test-$UUID" >> $GITHUB_OUTPUT
echo "address=https://fleet-test-$UUID.fleetuem.com" >> $GITHUB_OUTPUT
run-server:
runs-on: ubuntu-latest
needs: gen
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Start tunnel
env:
CERT_PEM: ${{ secrets.CLOUDFLARE_TUNNEL_FLEETUEM_CERT_B64 }}
run: |
# Increase maximum receive buffer size to roughly 2.5 MB.
# Cloudflared uses quic-go. This buffer holds packets that have been received by the kernel,
# but not yet read by the application (quic-go in this case). Once this buffer fills up, the
# kernel will drop any new incoming packet.
# See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size.
sudo sysctl -w net.core.rmem_max=2500000
# Install cloudflared
#
# We pin to version 2025.5.0 because something broke with 2025.6.1.
# 2025.6.1 fails with "failed to create tunnel: Unknown output format 'default'"
wget https://github.com/cloudflare/cloudflared/releases/download/2025.5.0/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared-linux-amd64.deb
# Add secret
echo "$CERT_PEM" | base64 -d > cert.pem
# Start tunnel
cloudflared tunnel --origincert cert.pem --hostname ${{ needs.gen.outputs.subdomain }} --url http://localhost:1337 --name ${{ needs.gen.outputs.subdomain }} --logfile cloudflared.log &
until [[ $(cloudflared tunnel --origincert cert.pem info -o json ${{ needs.gen.outputs.subdomain }} | jq '.conns[0].conns[0].is_pending_reconnect') = false ]]; do
echo "Awaiting tunnel ready..."
sleep 5
done
# Download fleet and fleetctl binaries from last successful build on main
- name: Download binaries
uses: dawidd6/action-download-artifact@5e780fc7bbd0cac69fc73271ed86edf5dcb72d67
with:
workflow: build-binaries.yaml
branch: main
name: build
path: build
check_artifacts: true
- name: Run Fleet server
timeout-minutes: 15
run: |
chmod +x ./build/fleetctl
./build/fleetctl preview --no-hosts --disable-open-browser
./build/fleetctl config set --address ${{ needs.gen.outputs.address }}
./build/fleetctl get enroll-secret
docker compose -f ~/.fleet/preview/docker-compose.yml logs --follow fleet01 fleet02 &
# Ensure Fleet server is responding before waiting for enrollments
echo "Checking Fleet server health..."
HEALTH_CHECK_COUNT=0
until curl -s -o /dev/null -w "%{http_code}" http://localhost:1337/healthz | grep -q "200"; do
HEALTH_CHECK_COUNT=$((HEALTH_CHECK_COUNT + 1))
if [ $HEALTH_CHECK_COUNT -ge 30 ]; then
echo "ERROR: Fleet server not responding after 150 seconds"
docker ps -a --filter "name=fleet"
exit 1
fi
sleep 5
done
echo "Fleet server is responding"
# Wait for all of the hosts to be enrolled
EXPECTED=3
ENROLLMENT_START=$(date +%s)
until [ $(./build/fleetctl get hosts --json | grep -v "No hosts found" | wc -l | tee hostcount) -ge $EXPECTED ]; do
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - ENROLLMENT_START))
echo -n "Waiting for hosts to enroll (${ELAPSED}s): "
cat hostcount | xargs echo -n
echo " / $EXPECTED"
# Show diagnostic info every 60 seconds
if [ $((ELAPSED % 60)) -lt 10 ]; then
./build/fleetctl get hosts --json || true
fi
sleep 10
done
echo "Success! $EXPECTED hosts enrolled."
- name: Show enrolled hosts
if: always()
run: |
./build/fleetctl get hosts --json
- name: Slack Notification
if: failure()
uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 # v1.24.0
with:
payload: |
{
"text": "${{ job.status }}\n${{ github.event.pull_request.html_url || github.event.head.html_url }}",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "Integration test result: ${{ job.status }}\nhttps://github.com/fleetdm/fleet/actions/runs/${{ github.run_id }}\n${{ github.event.pull_request.html_url || github.event.head.html_url }}"
}
}
]
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_G_HELP_ENGINEERING_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
- name: Cleanup tunnel
if: always()
run: cloudflared tunnel --origincert cert.pem delete --force ${{ needs.gen.outputs.subdomain }}
- name: Upload cloudflared logs
if: always()
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
with:
name: cloudflared.log
path: cloudflared.log
login:
runs-on: ubuntu-latest
needs: gen
outputs:
token: ${{ steps.login.outputs.token }}
steps:
# Download fleet and fleetctl binaries from last successful build on main
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Download binaries
uses: dawidd6/action-download-artifact@5e780fc7bbd0cac69fc73271ed86edf5dcb72d67
with:
workflow: build-binaries.yaml
branch: main
name: build
path: build
check_artifacts: true
# Login only here and share the token because otherwise we could hit rate limits.
- id: login
name: Attempt login
timeout-minutes: 5
run: |
chmod +x ./build/fleetctl
./build/fleetctl config set --address ${{ needs.gen.outputs.address }}
# Wait for Fleet server to be reachable first
echo "Waiting for Fleet server to be ready..."
ATTEMPT=0
until curl -s -o /dev/null -w "%{http_code}" ${{ needs.gen.outputs.address }}/healthz | grep -q "200"; do
ATTEMPT=$((ATTEMPT + 1))
if [ $ATTEMPT -ge 60 ]; then
echo "ERROR: Fleet server not reachable after 5 minutes"
exit 1
fi
echo "Waiting for server... attempt $ATTEMPT/60"
sleep 5
done
echo "Fleet server is responding, attempting login..."
ATTEMPT=0
until ./build/fleetctl login --email admin@example.com --password preview1337#; do
ATTEMPT=$((ATTEMPT + 1))
if [ $ATTEMPT -ge 30 ]; then
echo "ERROR: Failed to login after $ATTEMPT attempts"
exit 1
fi
echo "Login attempt $ATTEMPT failed, retrying in 5s..."
sleep 5
done
TOKEN=$(cat ~/.fleet/config| grep token | awk '{ print $2 }')
echo "token=$TOKEN" >> $GITHUB_OUTPUT
orbit-macos:
timeout-minutes: 10
strategy:
matrix:
# To run multiple VMs that have the same UUID we need to implement
# https://github.com/fleetdm/fleet/issues/8021 (otherwise orbit and osqueryd
# in the same host are enrolled as two hosts in Fleet).
# Until then we will just test the `stable` channel in all components.
#
# Alternatively, we can bring back the `edge` channel when we decide to upgrade
# our worker to macOS 13 in the future, as they changed the virtualization
# layer for 13 and now it has random UUIDs (https://github.com/actions/runner-images/issues/7591).
orbit-channel: [ 'stable' ]
osqueryd-channel: [ 'stable' ]
desktop-channel: [ 'stable' ]
runs-on: macos-latest
needs: [gen, login]
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Checkout Code
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install dependencies
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }}
- name: Wait until fleet address is reachable and fleet responds
run: |
until curl -v -fail ${{ needs.gen.outputs.address }}/version;
do
echo "Awaiting until fleet server responds..."
sleep 10
done
- name: Install Orbit
run: |
sudo hostname macos-orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}
SECRET_JSON=$(fleetctl get enroll_secret --json --debug)
echo $SECRET_JSON
SECRET=$(echo $SECRET_JSON | jq -r '.spec.secrets[0].secret')
echo "Secret: $SECRET"
echo "Hostname: $(hostname -s)"
fleetctl package --type pkg --fleet-url=${{ needs.gen.outputs.address }} --enroll-secret=$SECRET --orbit-channel=${{ matrix.orbit-channel }} --osqueryd-channel=${{ matrix.osqueryd-channel }} --desktop-channel=${{ matrix.desktop-channel }} --fleet-desktop --debug
sudo installer -pkg fleet-osquery.pkg -target /
ENROLLMENT_START=$(date +%s)
until fleetctl get hosts | grep -iF $(hostname -s);
do
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - ENROLLMENT_START))
echo "Awaiting enrollment... (${ELAPSED}s)"
sleep 10
done
- name: Collect orbit logs
if: always()
run: |
mkdir orbit-logs
sudo cp /var/log/orbit/* orbit-logs/
- name: Upload Orbit logs
if: always()
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
with:
name: orbit-macos-${{ matrix.orbit-channel }}-${{ matrix.osqueryd-channel }}-${{ matrix.desktop-channel }}-logs
path: |
orbit-logs
- name: Uninstall Orbit
run: |
sudo ./it-and-security/lib/macos/scripts/uninstall-fleetd-macos.sh
orbit-ubuntu:
timeout-minutes: 10
strategy:
matrix:
# To run multiple VMs that have the same UUID we need to implement
# https://github.com/fleetdm/fleet/issues/8021 (otherwise orbit and osqueryd
# in the same host are enrolled as two hosts in Fleet).
# Until then we will just test the `stable` channel in all components.
orbit-channel: [ 'stable' ]
osqueryd-channel: [ 'stable' ]
desktop-channel: [ 'stable' ]
runs-on: ubuntu-latest
needs: [gen, login]
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Install dependencies
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }}
- name: Checkout Code
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
with:
go-version-file: 'go.mod'
- name: Build Fleetctl
run: make fleetctl
- name: Wait until fleet address is reachable and fleet responds
run: |
until curl -v -fail ${{ needs.gen.outputs.address }}/version; do
echo "Awaiting until fleet server responds..."
sleep 10
done
- name: Install Orbit
run: |
sudo hostname ubuntu-orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}
chmod +x ./build/fleetctl
SECRET_JSON=$(fleetctl get enroll_secret --json --debug)
echo $SECRET_JSON
SECRET=$(echo $SECRET_JSON | jq -r '.spec.secrets[0].secret')
echo "Secret: $SECRET"
echo "Hostname: $(hostname -s)"
./build/fleetctl package --type deb --fleet-url=${{ needs.gen.outputs.address }} --enroll-secret=$SECRET --orbit-channel=${{ matrix.orbit-channel }} --osqueryd-channel=${{ matrix.osqueryd-channel }} --desktop-channel=${{ matrix.desktop-channel }} --fleet-desktop --debug
sudo dpkg -i fleet-osquery*
ENROLLMENT_START=$(date +%s)
until fleetctl get hosts | grep -iF $(hostname -s); do
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - ENROLLMENT_START))
echo "Awaiting enrollment... (${ELAPSED}s)"
sudo systemctl status orbit.service || true
sleep 10
done
- name: Collect orbit logs
if: always()
run: |
sudo journalctl -u orbit.service > orbit-logs
- name: Upload Orbit logs
if: always()
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
with:
name: orbit-ubuntu-${{ matrix.orbit-channel }}-${{ matrix.osqueryd-channel }}-${{ matrix.desktop-channel }}-logs
path: |
orbit-logs
- name: Uninstall Orbit
run: |
sudo apt remove fleet-osquery -y
orbit-windows-build:
timeout-minutes: 10
strategy:
matrix:
# To run multiple VMs that have the same UUID we need to implement
# https://github.com/fleetdm/fleet/issues/8021 (otherwise orbit and osqueryd
# in the same host are enrolled as two hosts in Fleet).
# Until then we will just test the `stable` channel in all components.
orbit-channel: [ 'stable' ]
osqueryd-channel: [ 'stable' ]
desktop-channel: [ 'stable' ]
runs-on: ubuntu-latest
needs: [gen, login]
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Install dependencies
run: |
docker pull fleetdm/wix:latest &
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }}
- name: Wait until fleet address is reachable and fleet responds
run: |
until curl -v -fail ${{ needs.gen.outputs.address }}/version;
do
echo "Awaiting until fleet server responds..."
sleep 10
done
- name: Build Orbit
run: |
SECRET_JSON=$(fleetctl get enroll_secret --json --debug)
echo $SECRET_JSON
SECRET=$(echo $SECRET_JSON | jq -r '.spec.secrets[0].secret')
echo "Secret: $SECRET"
echo "Hostname: $(hostname -s)"
fleetctl package --type msi --fleet-url=${{ needs.gen.outputs.address }} --enroll-secret=$SECRET --orbit-channel=${{ matrix.orbit-channel }} --osqueryd-channel=${{ matrix.osqueryd-channel }} --desktop-channel=${{ matrix.desktop-channel }} --fleet-desktop --debug
mv fleet-osquery.msi orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}-desktop-${{ matrix.desktop-channel }}.msi
- name: Upload MSI
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
with:
name: orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}-desktop-${{ matrix.desktop-channel }}.msi
path: orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}-desktop-${{ matrix.desktop-channel }}.msi
orbit-windows:
timeout-minutes: 10
strategy:
matrix:
# To run multiple VMs that have the same UUID we need to implement
# https://github.com/fleetdm/fleet/issues/8021 (otherwise orbit and osqueryd
# in the same host are enrolled as two hosts in Fleet).
# Until then we will just test the `stable` channel in all components.
orbit-channel: [ 'stable' ]
osqueryd-channel: [ 'stable' ]
desktop-channel: [ 'stable' ]
needs: [gen, login, orbit-windows-build]
runs-on: windows-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Install dependencies
shell: bash
run: |
npm install -g fleetctl
fleetctl config set --address ${{ needs.gen.outputs.address }} --token ${{ needs.login.outputs.token }} --tls-skip-verify
- name: Download MSI
id: download
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}-desktop-${{ matrix.desktop-channel }}.msi
- name: Install Orbit
shell: cmd
run: |
msiexec /i ${{steps.download.outputs.download-path}}\orbit-${{ matrix.orbit-channel }}-osqueryd-${{ matrix.osqueryd-channel }}-desktop-${{ matrix.desktop-channel }}.msi /quiet /passive /lv log.txt
sleep 120
# We can't very accurately check the install on these Windows hosts since the hostnames tend to
# overlap and we can't control the hostnames. Instead we just return and have the run-server job
# wait until the expected number of hosts enroll.
- name: Upload orbit install log
if: always()
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
with:
name: msiexec-install-log
path: log.txt
- name: Upload Orbit logs
if: always()
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # v4.5.0
with:
name: orbit-windows-${{ matrix.orbit-channel }}-${{ matrix.osqueryd-channel }}-${{ matrix.desktop-channel }}-logs
path: C:\Windows\system32\config\systemprofile\AppData\Local\FleetDM\Orbit\Logs\orbit-osquery.log
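The deleted workflow's jobs repeatedly poll with bounded loops — for `/healthz`, for login, for host enrollment. That poll-until-ready pattern, extracted as a sketch (the `wait_until` helper and the file-based demo are hypothetical stand-ins for the workflow's curl checks):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Poll a check command until it succeeds; fail after max_checks attempts.
# In the workflow the check was e.g.:
#   curl -s -o /dev/null -w "%{http_code}" http://localhost:1337/healthz | grep -q 200
wait_until() {
  local max_checks=$1 interval=$2
  shift 2
  local n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max_checks" ]; then
      echo "ERROR: check did not pass after $max_checks attempts" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Demo: wait for a marker file that a background task creates shortly.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &
wait_until 30 1 test -e "$marker"
echo "ready"
```

The bounded attempt count is what turns a hung dependency into a fast, diagnosable CI failure instead of a job that runs until the global timeout.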

View file

@@ -103,7 +103,7 @@ jobs:
role-to-assume: ${{env.AWS_IAM_ROLE}}
aws-region: ${{ env.AWS_REGION }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
- uses: hashicorp/setup-terraform@633666f66e0061ca3b725c73b2ec20cd13a8fdd1 # v2.0.3

View file

@@ -21,15 +21,20 @@ on:
type: string
default: 0
required: true
task_size:
description: "CPU and Memory setting for osquery-perf containers. Example: {\"cpu\":\"4098\",\"memory\":\"8192\"}"
type: string
default: "{\"cpu\":\"4096\",\"memory\":\"8192\"}"
required: true
sleep_time:
description: "Sleep time (in seconds) between batched osquery container deployments"
type: string
default: 60
default: 300
required: true
extra_flags:
description: "Extra flags for osquery-perf. Example: [\"--orbit_prob\", \"0.0\"]"
description: "Extra flags for osquery-perf. Example: [\"--orbit_prob\", \"0.0\", \"--host_count\", \"2000\", \"--start_period\", \"20m\"]"
type: string
default: "[\"--orbit_prob\", \"0.0\"]"
default: "[\"--orbit_prob\", \"0.0\", \"--host_count\", \"2000\", \"--start_period\", \"20m\"]"
required: false
terraform_action:
description: Dry run only? No "terraform apply"
@@ -58,6 +63,7 @@ env:
TF_VAR_extra_flags: "${{ inputs.extra_flags || '[]' }}"
TF_VAR_loadtest_containers: "${{ inputs.loadtest_containers }}"
TF_VAR_git_tag_branch: "${{ inputs.git_tag_branch }}"
TF_VAR_task_size: "${{ inputs.task_size }}"
permissions:
id-token: write
@@ -82,7 +88,7 @@ jobs:
aws-region: ${{ env.AWS_REGION }}
role-duration-seconds: 10800
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
- uses: hashicorp/setup-terraform@633666f66e0061ca3b725c73b2ec20cd13a8fdd1 # v2.0.3
@@ -150,7 +156,7 @@ jobs:
if [[ `terraform workspace show` = "${{ inputs.terraform_workspace }}" ]];
then
echo "TERRAFORM WORKSPACE: MATCHES - ${{ inputs.terraform_workspace }}"
./enroll.sh ${{ inputs.git_tag_branch }} ${{ inputs.loadtest_containers_starting_index}} ${{ inputs.loadtest_containers }} ${{ inputs.sleep_time }}
./enroll.sh ${{ inputs.git_tag_branch }} "${{ inputs.task_size }}" ${{ inputs.loadtest_containers_starting_index}} ${{ inputs.loadtest_containers }} ${{ inputs.sleep_time }}
else
echo "TERRAFORM WORKSPACE: DOES NOT MATCH INPUT - ${{ inputs.terraform_workspace }}"
fi

View file

@@ -51,7 +51,7 @@ jobs:
role-to-assume: ${{env.AWS_IAM_ROLE}}
aws-region: ${{ env.AWS_REGION }}
- name: Set up Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
- uses: hashicorp/setup-terraform@633666f66e0061ca3b725c73b2ec20cd13a8fdd1 # v2.0.3

View file

@@ -0,0 +1,254 @@
name: Product & Engineering Handbook Weekly Summary
on:
schedule:
- cron: '0 13 * * 1' # Every Monday at 8am EST (1pm UTC)
workflow_dispatch:
permissions:
contents: read
models: read
pull-requests: read
defaults:
run:
shell: bash
jobs:
summarize:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
with:
fetch-depth: 0
- name: Collect handbook diffs
id: diffs
run: |
SINCE_DATE=$(date -d '7 days ago' '+%Y-%m-%d' 2>/dev/null || date -v-7d '+%Y-%m-%d')
echo "since_date=$SINCE_DATE" >> "$GITHUB_OUTPUT"
HANDBOOK_PATHS="handbook/engineering/ handbook/product-design/"
# Get commit log for the period
COMMITS=$(git log --since="$SINCE_DATE" --pretty=format:'- %h %s (%an, %as)' -- $HANDBOOK_PATHS)
if [ -z "$COMMITS" ]; then
echo "has_changes=false" >> "$GITHUB_OUTPUT"
echo "No handbook changes in the last 7 days."
exit 0
fi
echo "has_changes=true" >> "$GITHUB_OUTPUT"
# Get the diff. Use FIRST_COMMIT^ as the base when possible;
# fall back to diffing against the empty tree if the commit has no
# parent (root commit) or is HEAD itself.
FIRST_COMMIT=$(git log --since="$SINCE_DATE" --reverse --pretty=format:'%H' -- $HANDBOOK_PATHS | head -1)
EMPTY_TREE=$(git hash-object -t tree /dev/null)
if git rev-parse "${FIRST_COMMIT}^" >/dev/null 2>&1; then
DIFF_BASE="${FIRST_COMMIT}^"
else
DIFF_BASE="$EMPTY_TREE"
fi
DIFF=$(git diff "${DIFF_BASE}..HEAD" -- $HANDBOOK_PATHS)
# Truncate diff to ~80K chars to stay within model context limits
DIFF=$(echo "$DIFF" | head -c 80000)
# Write to files for next steps
echo "$COMMITS" > /tmp/commits.txt
echo "$DIFF" > /tmp/diff.txt
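The `DIFF_BASE` logic above handles the edge case where the first commit in the window is a root commit: `FIRST_COMMIT^` has no parent to resolve, so the script diffs against git's empty tree instead. A self-contained sketch (throwaway repo and commits are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo whose first commit is a root commit (no parent).
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "root"
echo hello > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add file"

FIRST_COMMIT=$(git rev-list --max-parents=0 HEAD)   # oldest commit in range
EMPTY_TREE=$(git hash-object -t tree /dev/null)     # git's well-known empty tree

# Prefer the parent of the first commit; fall back to the empty tree
# when the commit has no parent (root commit).
if git rev-parse -q --verify "${FIRST_COMMIT}^" >/dev/null 2>&1; then
  DIFF_BASE="${FIRST_COMMIT}^"
else
  DIFF_BASE="$EMPTY_TREE"
fi
git diff --name-only "${DIFF_BASE}..HEAD"
```

Diffing against the empty tree makes every file in `HEAD` appear as an addition, which is exactly the right baseline when there is no earlier state to compare with.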
- name: Collect PR context
if: steps.diffs.outputs.has_changes == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
run: |
HANDBOOK_PATHS="handbook/engineering/ handbook/product-design/"
SINCE_DATE="${{ steps.diffs.outputs.since_date }}"
# Get unique commit SHAs for handbook changes
COMMIT_SHAS=$(git log --since="$SINCE_DATE" --pretty=format:'%H' -- $HANDBOOK_PATHS | sort -u)
echo "PR_CONTEXT:" > /tmp/pr_context.txt
# Track PRs we've already processed to avoid duplicates
declare -A SEEN_PRS
for SHA in $COMMIT_SHAS; do
# Find the PR that introduced this commit
PR_JSON=$(gh api "repos/${GH_REPO}/commits/${SHA}/pulls" \
--jq '.[0] | {number, title, html_url, body}' 2>/dev/null || echo "")
if [ -z "$PR_JSON" ] || [ "$PR_JSON" = "null" ]; then
continue
fi
PR_NUM=$(echo "$PR_JSON" | jq -r '.number')
# Skip if we already processed this PR
if [ -n "${SEEN_PRS[$PR_NUM]:-}" ]; then
continue
fi
SEEN_PRS[$PR_NUM]=1
PR_TITLE=$(echo "$PR_JSON" | jq -r '.title')
PR_URL=$(echo "$PR_JSON" | jq -r '.html_url')
# Truncate PR body to 500 chars to keep context manageable
PR_BODY=$(echo "$PR_JSON" | jq -r '.body // ""' | head -c 500)
echo "" >> /tmp/pr_context.txt
echo "---" >> /tmp/pr_context.txt
echo "PR #${PR_NUM}: ${PR_TITLE}" >> /tmp/pr_context.txt
echo "URL: ${PR_URL}" >> /tmp/pr_context.txt
echo "Description: ${PR_BODY}" >> /tmp/pr_context.txt
done
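The loop above maps each commit back to its PR and uses a bash associative array (`SEEN_PRS`) to process each PR only once, since several commits often land via one PR. The dedupe idiom in isolation (the PR numbers here are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Process each PR number once, preserving first-seen order,
# using a bash associative array as the "seen" set.
declare -A SEEN
UNIQUE=()
for PR_NUM in 101 205 101 333 205 101; do
  if [ -n "${SEEN[$PR_NUM]:-}" ]; then
    continue  # already processed this PR
  fi
  SEEN[$PR_NUM]=1
  UNIQUE+=("$PR_NUM")
done
echo "${UNIQUE[@]}"   # → 101 205 333
```

The `${SEEN[$PR_NUM]:-}` expansion keeps the lookup safe under `set -u` when the key has never been set, which is why the workflow's version is written the same way.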
- name: Summarize with AI
if: steps.diffs.outputs.has_changes == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SINCE_DATE: ${{ steps.diffs.outputs.since_date }}
run: |
COMMITS=$(cat /tmp/commits.txt)
DIFF=$(cat /tmp/diff.txt)
PR_CONTEXT=$(cat /tmp/pr_context.txt)
# Build the prompt
PROMPT="You are summarizing changes to a company handbook for a Slack post.
Below are the commits, associated pull requests, and diffs made to the Product Design and Engineering sections of the Fleet handbook in the past week (since ${SINCE_DATE}).
COMMITS:
${COMMITS}
PULL REQUESTS (with descriptions for additional context):
${PR_CONTEXT}
DIFF:
${DIFF}
Write a concise, well-organized summary suitable for posting in Slack. Format it using Slack mrkdwn syntax (use *bold* not **bold**, use • for bullets).
Group changes by section (Engineering vs Product Design) if both have changes.
Focus on WHAT changed and WHY it matters — use the PR descriptions for context on the intent behind changes. Skip trivial whitespace or formatting-only changes.
For each significant change, include a link to the relevant PR using Slack link syntax: <URL|PR #123>.
Keep it under 3000 characters. Do not include a greeting or sign-off."
# Call GitHub Models API (OpenAI-compatible endpoint, no extra secrets needed)
RESPONSE=$(jq -n --arg prompt "$PROMPT" \
'{
"model": "openai/gpt-4.1",
"max_tokens": 1024,
"messages": [{"role": "user", "content": $prompt}]
}' | curl -sf -L -X POST "https://models.github.ai/inference/chat/completions" \
-H "Content-Type: application/json" \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${GITHUB_TOKEN}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
-d @-)
# Extract the text content (OpenAI-compatible response format)
SUMMARY=$(echo "$RESPONSE" | jq -r '.choices[0].message.content // empty')
if [ -z "$SUMMARY" ]; then
echo "::error::Failed to get summary from GitHub Models API"
echo "$RESPONSE" | jq . || echo "$RESPONSE"
exit 1
fi
echo "$SUMMARY" > /tmp/summary.txt
- name: Post summary to Slack
if: steps.diffs.outputs.has_changes == 'true'
env:
SLACK_WEBHOOK_URL: ${{ secrets.TEST_SLACK_PRODUCT_ENG_HANDBOOK_SUMMARY_WEBHOOK_URL }}
SINCE_DATE: ${{ steps.diffs.outputs.since_date }}
run: |
if [ -z "$SLACK_WEBHOOK_URL" ]; then
echo "::error::TEST_SLACK_PRODUCT_ENG_HANDBOOK_SUMMARY_WEBHOOK_URL secret is not set or is empty. Add it in repo Settings → Secrets → Actions."
exit 1
fi
SUMMARY=$(cat /tmp/summary.txt)
# Slack section.text.mrkdwn has a 3000 char limit. Split the
# summary into chunks on line boundaries in bash, then build
# a section block per chunk via jq.
MAX_LEN=2900
CHUNKS=()
CURRENT=""
while IFS= read -r LINE || [ -n "$LINE" ]; do
if [ $(( ${#CURRENT} + ${#LINE} + 1 )) -gt "$MAX_LEN" ] && [ -n "$CURRENT" ]; then
CHUNKS+=("$CURRENT")
CURRENT="$LINE"
else
if [ -n "$CURRENT" ]; then
CURRENT="${CURRENT}"$'\n'"${LINE}"
else
CURRENT="$LINE"
fi
fi
done <<< "$SUMMARY"
[ -n "$CURRENT" ] && CHUNKS+=("$CURRENT")
# Build a JSON array of chunks, preserving newlines within each chunk
CHUNKS_JSON=$(jq -n '$ARGS.positional' --args -- "${CHUNKS[@]}")
# Build Slack Block Kit payload
FALLBACK="Product & Engineering handbook weekly summary (since ${SINCE_DATE})"
jq -n --arg fallback "$FALLBACK" --arg since "$SINCE_DATE" --argjson chunks "$CHUNKS_JSON" \
'{
"text": $fallback,
"blocks": (
[
{"type": "header", "text": {"type": "plain_text", "text": "📋 Product & Engineering Handbook Weekly Summary", "emoji": true}},
{"type": "context", "elements": [{"type": "mrkdwn", "text": ("Changes since " + $since)}]},
{"type": "divider"}
] + [
$chunks[] | {"type": "section", "text": {"type": "mrkdwn", "text": .}}
]
)
}' | curl -sf -X POST "$SLACK_WEBHOOK_URL" \
-H "Content-Type: application/json" \
-d @-
- name: Post no-changes notice to Slack
if: steps.diffs.outputs.has_changes == 'false'
env:
SLACK_WEBHOOK_URL: ${{ secrets.TEST_SLACK_PRODUCT_ENG_HANDBOOK_SUMMARY_WEBHOOK_URL }}
run: |
if [ -z "$SLACK_WEBHOOK_URL" ]; then
echo "::error::TEST_SLACK_PRODUCT_ENG_HANDBOOK_SUMMARY_WEBHOOK_URL secret is not set or is empty. Add it in repo Settings → Secrets → Actions."
exit 1
fi
jq -n '{
"text": "Product & Engineering handbook weekly summary — no changes this week.",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": "📋 Product & Engineering Handbook Weekly Summary",
"emoji": true
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "No changes to the Product Design or Engineering handbook sections this week. 🎉"
}
}
]
}' | curl -sf -X POST "$SLACK_WEBHOOK_URL" \
-H "Content-Type: application/json" \
-d @-
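The line-boundary chunking loop in the "Post summary to Slack" step above can be exercised standalone. A minimal bash sketch (the sample text and `MAX_LEN` are made up for illustration):

```shell
set -euo pipefail

# Split a multi-line summary into chunks that each stay under MAX_LEN
# characters, breaking only on line boundaries (never mid-line).
MAX_LEN=20
SUMMARY=$'first line\nsecond line\nthird line'

CHUNKS=()
CURRENT=""
while IFS= read -r LINE || [ -n "$LINE" ]; do
  if [ $(( ${#CURRENT} + ${#LINE} + 1 )) -gt "$MAX_LEN" ] && [ -n "$CURRENT" ]; then
    # Current chunk is full; flush it and start a new one with this line.
    CHUNKS+=("$CURRENT")
    CURRENT="$LINE"
  else
    if [ -n "$CURRENT" ]; then
      CURRENT="${CURRENT}"$'\n'"${LINE}"
    else
      CURRENT="$LINE"
    fi
  fi
done <<< "$SUMMARY"
[ -n "$CURRENT" ] && CHUNKS+=("$CURRENT")

# Each 10-11 char line exceeds MAX_LEN=20 when joined, so we get 3 chunks.
echo "chunks=${#CHUNKS[@]}"
```

Note the `|| [ -n "$LINE" ]` guard: it keeps the final line even when the input lacks a trailing newline, which is why the workflow uses the same pattern.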

@ -31,7 +31,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

@ -67,7 +67,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"

@ -40,12 +40,12 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
- name: Login to Docker Hub
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}

@ -57,7 +57,7 @@ jobs:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

@ -0,0 +1,121 @@
name: Sync Maintained Apps Outputs to R2
# Synchronizes ee/maintained-apps/outputs folder to Cloudflare R2 bucket using AWS CLI.
# Triggers on commits to main that modify files in the outputs directory, or manually via workflow_dispatch.
on:
push:
branches: [main]
paths: ["ee/maintained-apps/outputs/**"]
workflow_dispatch:
inputs:
dry_run:
description: 'Preview sync without uploading to R2'
required: false
default: 'false'
type: boolean
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
defaults:
run:
shell: bash
permissions:
contents: read
env:
R2_BUCKET: "maintained-apps"
AWS_ACCESS_KEY_ID: ${{ secrets.R2_MAINTAINED_APPS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.R2_MAINTAINED_APPS_ACCESS_KEY_SECRET }}
R2_ENDPOINT: ${{ secrets.R2_ENDPOINT }}
AWS_MAX_ATTEMPTS: "10"
AWS_RETRY_MODE: standard
# Dry-run mode: enabled via input OR automatically for non-main branches
DRY_RUN: ${{ github.event.inputs.dry_run == 'true' || github.ref != 'refs/heads/main' }}
jobs:
sync-to-r2:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Checkout Repository
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Verify Source Directory Exists
run: |
if [ ! -d "./ee/maintained-apps/outputs" ]; then
echo "ERROR: Source directory ./ee/maintained-apps/outputs not found!" >&2
exit 1
fi
FILE_COUNT=$(find ./ee/maintained-apps/outputs -type f | wc -l)
echo "Found $FILE_COUNT files to sync"
- name: Sync to R2 Bucket (${{ env.DRY_RUN == 'true' && 'DRY RUN' || 'LIVE' }})
run: |
set -euo pipefail
echo "Syncing ee/maintained-apps/outputs → s3://${{ env.R2_BUCKET }}"
echo "Endpoint: ${R2_ENDPOINT}"
echo "AWS_MAX_ATTEMPTS: ${AWS_MAX_ATTEMPTS}"
echo "DRY_RUN: ${DRY_RUN}"
# Build sync command
SYNC_ARGS=(--delete)
if [ "${DRY_RUN}" = "true" ]; then
SYNC_ARGS+=(--dryrun)
echo "🔍 DRY RUN MODE - No files will be uploaded"
fi
aws s3 sync "${SYNC_ARGS[@]}" \
./ee/maintained-apps/outputs \
s3://${{ env.R2_BUCKET }}/manifests \
--endpoint-url="${R2_ENDPOINT}" || {
EXIT_CODE=$?
echo "❌ Sync failed with exit code: $EXIT_CODE" >&2
exit $EXIT_CODE
}
if [ "${DRY_RUN}" = "true" ]; then
echo "✅ Dry run completed - review output above for files that would be synced"
else
echo "✅ Sync completed successfully!"
fi
- name: Notify Slack on Failure
if: failure()
uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 # v1.24.0
with:
payload: |
{
"text": ":rotating_light: R2 Sync Failed",
"blocks": [
{
"type": "section",
"fields": [
{"type": "mrkdwn", "text": "*Workflow:* ${{ github.workflow }}"},
{"type": "mrkdwn", "text": "*Commit:* `${{ github.sha }}`"}
]
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "Failed to sync `ee/maintained-apps/outputs` to R2 bucket\n\nView logs: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
}
}
]
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_G_HELP_P1_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK

@ -72,7 +72,7 @@ jobs:
done
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

@ -0,0 +1,58 @@
on:
pull_request:
paths:
- 'ee/fleet-agent-downloader/**'
- '.github/workflows/test-fleet-agent-downloader-changes.yml'
# This allows a subsequently queued workflow run to interrupt previous runs
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id}}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
permissions:
contents: read
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [20.x]
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
# Set the Node.js version
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@5e21ff4d9bc1a8cf6de233a3057d20ec6b3fb69d # v3.8.1
with:
node-version: ${{ matrix.node-version }}
# Now start building!
# > …but first, get a little crazy for a sec and delete the top-level package.json file
# > i.e. the one used by the Fleet server. This is because require() in node will go
# > hunting in ancestral directories for missing dependencies, and since some of the
# > bundled transpiler tasks sniff for package availability using require(), this trips
# > up when it encounters another Node universe in the parent directory.
- run: rm -rf package.json package-lock.json node_modules/
# > Turns out there's a similar issue with how eslint plugins are looked up, so we
# > delete the top level .eslintrc file too.
- run: rm -f .eslintrc.js
# Get dependencies (including dev deps)
- run: cd ee/fleet-agent-downloader/ && npm install
# Run sanity checks
- run: cd ee/fleet-agent-downloader/ && npm test
# Compile assets
- run: cd ee/fleet-agent-downloader/ && npm run build-for-prod

@ -47,7 +47,7 @@ jobs:
path: fleet
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "fleet/go.mod"

@ -43,7 +43,7 @@ jobs:
path: fleet
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "fleet/go.mod"

@ -47,7 +47,7 @@ jobs:
path: fleet
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "fleet/go.mod"
@ -96,6 +96,7 @@ jobs:
"has_windows_apps=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
"has_google_chrome=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
"has_7zip=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
"has_firefox=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
exit 0
}
@ -107,6 +108,7 @@ jobs:
"has_windows_apps=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
"has_google_chrome=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
"has_7zip=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
"has_firefox=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
Write-Host "No windows apps changed, skipping Windows workflow"
} else {
"has_windows_apps=true" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
@ -129,6 +131,14 @@ jobs:
} else {
"has_7zip=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
}
# Check if firefox/windows or firefox@esr/windows is in the changed apps
if (("firefox/windows" -in $windowsSlugs) -or ("firefox@esr/windows" -in $windowsSlugs)) {
"has_firefox=true" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
Write-Host "Firefox detected in changed apps"
} else {
"has_firefox=false" | Out-File -FilePath $env:GITHUB_OUTPUT -Encoding utf8 -Append
}
}
shell: pwsh
@ -232,6 +242,90 @@ jobs:
}
shell: pwsh
- name: Remove pre-installed Firefox
if: steps.check-windows-apps.outputs.has_windows_apps == 'true' && steps.check-windows-apps.outputs.has_firefox == 'true'
run: |
Write-Host "Listing all installed packages containing 'Firefox':"
Get-Package | Where-Object { $_.Name -like "*Firefox*" } | ForEach-Object {
Write-Host " - $($_.Name) (Version: $($_.Version))"
}
$uninstallPaths = @(
"HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*",
"HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"
)
$found = $false
foreach ($path in $uninstallPaths) {
$entries = Get-ItemProperty $path -ErrorAction SilentlyContinue | Where-Object { $_.DisplayName -like "*Mozilla Firefox*" }
foreach ($entry in $entries) {
if (-not $entry) { continue }
$found = $true
Write-Host "Found Firefox: $($entry.DisplayName)"
$uninstallString = if ($entry.QuietUninstallString) {
$entry.QuietUninstallString
} elseif ($entry.UninstallString) {
$entry.UninstallString
} else {
$null
}
if ($uninstallString) {
Write-Host "Uninstall string: $uninstallString"
try {
$splitArgs = $uninstallString.Split('"')
if ($splitArgs.Length -ge 3) {
$exePath = $splitArgs[1]
Write-Host "Uninstalling Firefox via: $exePath /S"
Start-Process -FilePath $exePath -ArgumentList "/S" -Wait -NoNewWindow
Write-Host "Successfully removed $($entry.DisplayName)"
} else {
Write-Host "Uninstalling Firefox via: $uninstallString /S"
Start-Process -FilePath $uninstallString -ArgumentList "/S" -Wait -NoNewWindow
Write-Host "Successfully removed $($entry.DisplayName)"
}
} catch {
Write-Host "Failed to remove Firefox: $($_.Exception.Message)"
}
} else {
Write-Host "Firefox uninstall string not found in registry entry"
}
}
}
if (-not $found) {
Write-Host "Firefox not found in registry"
}
# Kill any lingering Firefox/Mozilla processes
Write-Host "Stopping any lingering Firefox processes..."
Get-Process -Name "firefox","plugin-container","updater","maintenanceservice*","helper" -ErrorAction SilentlyContinue | ForEach-Object {
Write-Host " Killing process: $($_.Name) (PID: $($_.Id))"
Stop-Process -Id $_.Id -Force -ErrorAction SilentlyContinue
}
Start-Sleep -Seconds 10
# Force-remove leftover Firefox directories from Program Files
$firefoxDirs = @(
"C:\Program Files\Mozilla Firefox",
"C:\Program Files (x86)\Mozilla Firefox",
"C:\Program Files\Mozilla Maintenance Service"
)
foreach ($dir in $firefoxDirs) {
if (Test-Path $dir) {
Write-Host "Removing leftover directory: $dir"
Remove-Item -Path $dir -Recurse -Force -ErrorAction SilentlyContinue
if (Test-Path $dir) {
Write-Host "WARNING: Failed to fully remove $dir"
} else {
Write-Host "Removed $dir"
}
}
}
shell: pwsh
- name: Filter apps.json and verify changed apps
if: steps.check-windows-apps.outputs.has_windows_apps == 'true'
run: |

@ -43,7 +43,7 @@ jobs:
path: fleet
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "fleet/go.mod"

@ -68,7 +68,7 @@ jobs:
if: github.event_name == 'schedule'
strategy:
matrix:
mysql: ["mysql:8.0.39", "mysql:8.4.8"]
mysql: ["mysql:8.0.42", "mysql:8.4.8"]
uses: ./.github/workflows/test-go-suite.yaml
with:
suite: activity

@ -111,7 +111,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
@ -227,6 +227,13 @@ jobs:
attempt=$((attempt + 1))
done
- name: Wait for LocalStack
if: ${{ env.NEED_DOCKER && contains(env.DOCKER_COMMAND, 'localstack') }}
run: |
echo "Waiting for LocalStack..."
timeout 60 bash -c 'until curl -sf http://localhost:4566/_localstack/health; do sleep 2; done'
echo "LocalStack is ready"
- name: Generate test schema
if: ${{ env.GENERATE_TEST_SCHEMA }}
run: make test-schema
@ -245,7 +252,7 @@ jobs:
S3_STORAGE_TEST=1 \
SAML_IDP_TEST=1 \
MAIL_TEST=1 \
AWS_ENDPOINT_URL="http://localhost:4566" \
AWS_ENDPOINT_URL="http://127.0.0.1:4566" \
AWS_REGION=us-east-1 \
NETWORK_TEST_GITHUB_TOKEN=${{ secrets.FLEET_RELEASE_GITHUB_PAT }} \
CI_TEST_PKG="${{ env.CI_TEST_PKG }}" \

.github/workflows/test-go-windows.yml (new file, 113 lines)

@ -0,0 +1,113 @@
name: Go tests (Windows)
on:
push:
branches:
- main
- patch-*
- prepare-*
paths:
- "orbit/**.go"
- "go.mod"
- "go.sum"
- ".github/workflows/test-go-windows.yml"
pull_request:
paths:
- "orbit/**.go"
- "go.mod"
- "go.sum"
- ".github/workflows/test-go-windows.yml"
workflow_dispatch: # Manual
schedule:
- cron: '0 4 * * *'
# This allows a subsequently queued workflow run to interrupt previous runs
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id}}
cancel-in-progress: true
defaults:
run:
shell: pwsh
permissions:
contents: read
jobs:
test-go-windows:
runs-on: windows-latest
timeout-minutes: 30
steps:
- name: Harden Runner
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
- name: Checkout Code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Go
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'
- name: Run Windows-specific Go tests
run: |
$packages = @(
"./orbit/pkg/bitlocker/..."
"./orbit/pkg/keystore/..."
"./orbit/pkg/platform/..."
"./orbit/pkg/table/bitlocker_key_protectors/..."
"./orbit/pkg/table/cis_audit/..."
"./orbit/pkg/table/windowsupdatetable/..."
)
Write-Host "Running Windows-specific Go tests for packages:"
$packages | ForEach-Object { Write-Host " $_" }
go test -v -timeout=10m $packages 2>&1 | Tee-Object -FilePath "$env:RUNNER_TEMP\gotest.log"
if ($LASTEXITCODE -ne 0) {
Write-Host "::error::Go tests failed with exit code $LASTEXITCODE"
exit $LASTEXITCODE
}
- name: Upload test logs
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: windows-go-test-logs
path: ${{ runner.temp }}\gotest.log
- name: Generate summary of errors
if: failure()
run: |
$logContent = Get-Content "$env:RUNNER_TEMP\gotest.log" -Raw -ErrorAction SilentlyContinue
if ($logContent) {
$failures = ($logContent -split "`n" | Select-String -Pattern "^--- FAIL:").Line
$panics = ($logContent -split "`n" | Select-String -Pattern "^panic:").Line
$failPkgs = ($logContent -split "`n" | Select-String -Pattern "^FAIL\t").Line
Write-Host "=== Test Failures ==="
if ($failures) { $failures | ForEach-Object { Write-Host $_ } }
if ($panics) { $panics | ForEach-Object { Write-Host $_ } }
if ($failPkgs) { $failPkgs | ForEach-Object { Write-Host $_ } }
}
- name: Slack Notification
if: github.event_name == 'schedule' && failure()
uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 # v1.24.0
with:
payload: |
{
"text": "Windows Go tests failed",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":x: *Windows Go tests failed*\n<https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}|View run>"
}
}
]
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_G_HELP_ENGINEERING_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK

View file

@ -87,7 +87,7 @@ jobs:
strategy:
matrix:
suite: ["integration-core", "integration-enterprise", "integration-mdm", "fleetctl", "main", "mysql", "service", "vuln"]
mysql: ["mysql:8.0.39", "mysql:8.4.8"]
mysql: ["mysql:8.0.42", "mysql:8.4.8"]
uses: ./.github/workflows/test-go-suite.yaml
with:
suite: ${{ matrix.suite }}
@ -124,7 +124,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

@ -40,7 +40,7 @@ jobs:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

@ -63,7 +63,7 @@ jobs:
- name: Install Go
if: ${{ matrix.build_type == 'local' }}
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

@ -53,7 +53,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"

@ -15,26 +15,26 @@ on:
- patch-*
- prepare-*
paths:
- 'cmd/fleetctl/**.go'
- 'pkg/**.go'
- 'server/context/**.go'
- 'orbit/**.go'
- 'ee/fleetctl/**.go'
- 'tools/fleetctl-docker/**'
- 'tools/wix-docker/**'
- 'tools/bomutils-docker/**'
- '.github/workflows/test-packaging.yml'
- "cmd/fleetctl/**.go"
- "pkg/**.go"
- "server/context/**.go"
- "orbit/**.go"
- "ee/fleetctl/**.go"
- "tools/fleetctl-docker/**"
- "tools/wix-docker/**"
- "tools/bomutils-docker/**"
- ".github/workflows/test-packaging.yml"
pull_request:
paths:
- 'cmd/fleetctl/**.go'
- 'pkg/**.go'
- 'server/context/**.go'
- 'orbit/**.go'
- 'ee/fleetctl/**.go'
- 'tools/fleetctl-docker/**'
- 'tools/wix-docker/**'
- 'tools/bomutils-docker/**'
- '.github/workflows/test-packaging.yml'
- "cmd/fleetctl/**.go"
- "pkg/**.go"
- "server/context/**.go"
- "orbit/**.go"
- "ee/fleetctl/**.go"
- "tools/fleetctl-docker/**"
- "tools/wix-docker/**"
- "tools/bomutils-docker/**"
- ".github/workflows/test-packaging.yml"
workflow_dispatch: # Manual
# This allows a subsequently queued workflow run to interrupt previous runs
@ -55,7 +55,7 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-15]
os: [ubuntu-latest, macos-15, macos-26]
runs-on: ${{ matrix.os }}
steps:
@ -80,14 +80,14 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: "go.mod"
- name: Install wine and wix
if: startsWith(matrix.os, 'macos')
run: |
./it-and-security/lib/macos/scripts/install-wine.sh -n
./assets/scripts/install-wine.sh -n
wget https://github.com/wixtoolset/wix3/releases/download/wix3112rtm/wix311-binaries.zip -nv -O wix.zip
mkdir wix
unzip wix.zip -d wix

@ -45,7 +45,7 @@ jobs:
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Install Go
uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: 'go.mod'

.gitignore (1 change)

@ -53,6 +53,7 @@ backup.sql.gz
# Common mistake for new developers to run npm install and then end up
# committing a package-lock.json. Fleet app uses Yarn with yarn.lock.
package-lock.json
!website/package-lock.json
# infra
.terraform

@ -10,9 +10,35 @@ issues:
linters:
default: none
enable:
- gosec
- modernize
- testifylint
- nilaway
- setboolcheck
- depguard
settings:
gosec:
# Only enable rules that are too noisy on existing code but valuable for new code.
# Existing violations were audited during the v2.7.1 -> v2.11.3 upgrade and found
# to be false positives or safe patterns, but we want to catch real issues going forward.
includes:
- G101 # Potential hardcoded credentials.
- G115 # Integer overflow conversion.
- G117 # Marshaled struct field matches secret pattern.
- G118 # Goroutine uses context.Background/TODO while request-scoped context is available.
- G122 # Filesystem race in filepath.Walk/WalkDir callback.
- G202 # SQL string concatenation.
- G602 # Slice index out of range.
- G704 # SSRF via taint analysis.
- G705 # XSS via taint analysis.
- G706 # Log injection via taint analysis.
depguard:
rules:
no-old-rand:
list-mode: lax
deny:
- pkg: math/rand$
desc: Use math/rand/v2 instead
custom:
nilaway:
type: module
@ -21,6 +47,9 @@ linters:
# Settings must be a "map from string to string" to mimic command line flags: the keys are
# flag names and the values are the values to the particular flags.
include-pkgs: "github.com/fleetdm/fleet/v4"
setboolcheck:
type: module
description: Flags map[T]bool used as sets; suggests map[T]struct{} instead.
exclusions:
generated: strict
rules:

@ -177,7 +177,22 @@ linters:
- G104 # Errors unhandled. We are using errcheck linter instead of this rule.
- G204 # Subprocess launched with variable. Some consider this rule to be too noisy.
- G301 # Directory permissions 0750 as opposed to standard 0755. Consider enabling stricter permission in the future.
- G304 # File path provided as taint input
- G304 # File path provided as taint input.
- G702 # Command injection via taint analysis (taint version of excluded G204).
- G703 # Path traversal via taint analysis (taint version of excluded G304).
# The following rules are excluded from the full lint but enabled in the incremental
# linter (.golangci-incremental.yml) so they only apply to new/changed code.
# Existing violations were audited during the v2.7.1 -> v2.11.3 upgrade.
- G101 # Potential hardcoded credentials.
- G115 # Integer overflow conversion.
- G117 # Marshaled struct field matches secret pattern.
- G118 # Goroutine uses context.Background/TODO while request-scoped context is available.
- G122 # Filesystem race in filepath.Walk/WalkDir callback.
- G202 # SQL string concatenation.
- G602 # Slice index out of range.
- G704 # SSRF via taint analysis.
- G705 # XSS via taint analysis.
- G706 # Log injection via taint analysis.
config:
G306: "0644"

@ -66,4 +66,4 @@ dockers:
- fleetctl
dockerfile: tools/fleet-docker/Dockerfile
image_templates:
- "fleetdm/fleet:{{ .ShortCommit }}"
- 'fleetdm/fleet:{{ envOrDefault "DOCKER_IMAGE_TAG" .Branch }}'

@ -0,0 +1,92 @@
---
name: cherry-pick
description: Cherry-pick a merged PR onto a release candidate branch and open a new PR. Use when asked to cherry-pick, backport, or port a PR to an rc-minor or rc-patch branch.
---
# Cherry-pick kilocode skill
## Important: single session only
**Use only a single agent session for the entire cherry-pick.** Multiple sessions for the same cherry-pick have caused duplicate PRs in the past.
## Arguments
This skill expects two arguments:
1. **Target branch** — the release candidate branch (e.g., `rc-minor-fleet-v4.83.0`, `rc-patch-fleet-v4.82.1`)
2. **Source PR** — a GitHub PR URL or number from the `fleetdm/fleet` repo
## Steps
1. **Fetch the latest remote state**
```bash
git fetch origin
```
2. **Identify the merge commit** — find the merge commit SHA for the source PR on `main`.
```bash
gh pr view <PR> --json mergeCommit --jq '.mergeCommit.oid'
```
3. **Create a working branch** from the target release branch:
```bash
git checkout -b cherry-pick-<PR_NUMBER>-to-<target-branch> origin/<target-branch>
```
4. **Cherry-pick the merge commit** using `-m 1` (mainline parent):
```bash
git cherry-pick -m 1 <merge-commit-sha>
```
- If there are conflicts, resolve them and continue the cherry-pick.
- If the PR was a squash-merge (single commit, no merge commit), omit `-m 1`.
5. **Push and open a PR** against the target branch:
```bash
git push -u origin HEAD
gh pr create \
--base <target-branch> \
--title "Cherry-pick #<PR_NUMBER> onto <target-branch>" \
--body "Cherry-pick of https://github.com/fleetdm/fleet/pull/<PR_NUMBER> onto the <target-branch> release branch."
```
## Commit message format
Follow the established pattern:
```
Cherry-pick #<PR_NUMBER> onto <target-branch>
Cherry-pick of https://github.com/fleetdm/fleet/pull/<PR_NUMBER> onto the
<target-branch> release branch.
```
If the original commit has a `Co-authored-by` trailer, preserve it.
## Branch naming
```
cherry-pick-<PR_NUMBER>-to-<target-branch>
```
Example: `cherry-pick-41914-to-rc-minor-fleet-v4.83.0`
## Common issues
- **Duplicate PRs** — never run multiple agent sessions for the same cherry-pick.
- **Conflict on cherry-pick** — resolve conflicts manually, then `git cherry-pick --continue`.
- **Migration timestamp ordering** — if the cherry-picked PR includes migrations, verify timestamps are in chronological order on the target branch.
## References
- Release process: https://github.com/fleetdm/fleet/blob/main/docs/Contributing/workflows/releasing-fleet.md
- Backport checker: `tools/release/backport-check.sh`
---
*This file will grow as new patterns and constraints are established.*

@ -0,0 +1,52 @@
---
name: fleet-gitops
description: Use when working on Fleet GitOps configuration files, including osquery queries, configuration profiles, DDM declarations, software management, and CVE remediation in the it-and-security folder.
---
# Fleet GitOps kilocode skill
## Queries & Reports
- Only use **Fleet tables and supported columns** when writing osquery queries or Fleet reports.
- Do not reference tables or columns that are not present in the Fleet schema for the target platform.
- Validate tables and column names against the Fleet schema before including them in a query:
- https://github.com/fleetdm/fleet/tree/main/schema
## Configuration Profiles
When generating or modifying configuration profiles:
- **First-party Apple payloads** (`.mobileconfig`) — validate payload keys, types, and allowed values against the Apple Device Management reference:
- https://github.com/apple/device-management/tree/release/mdm/profiles
- **Third-party Apple payloads** (`.mobileconfig`) — validate against the ProfileManifests community reference:
- https://github.com/ProfileManifests/ProfileManifests
- **Windows CSPs** (`.xml`) — validate CSP paths, formats, and allowed values against Microsoft's MDM protocol reference:
- https://learn.microsoft.com/en-us/windows/client-management/mdm/
- **Android profiles** (`.json`) — validate keys and values against the Android Management API `enterprises.policies` reference:
- https://developers.google.com/android/management/reference/rest/v1/enterprises.policies
## Software
- When adding software for macOS or Windows hosts, **always check the Fleet-maintained app catalog first** before using a custom package:
- https://github.com/fleetdm/fleet/tree/main/ee/maintained-apps
- In GitOps YAML, use the `fleet_maintained_apps` key with the app's `slug` to reference a Fleet-maintained app.
- When remediating a CVE, use Fleet's built-in vulnerability detection to identify affected software, then follow the Software section above to deploy a fix — preferring a Fleet-maintained app update where available, otherwise a custom package.
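
A minimal sketch of that slug reference in team YAML, assuming (hypothetically) a top-level `software` key; confirm the exact nesting against the Fleet GitOps documentation listed in the References section:

```yaml
# Hypothetical sketch: install a Fleet-maintained app by slug.
# Key placement is an assumption, not a confirmed schema.
software:
  fleet_maintained_apps:
    - slug: firefox/windows
```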
## Declarative Device Management (DDM)
When generating or modifying DDM declarations:
- Validate declaration types, keys, and values against the Apple DDM reference:
- https://github.com/apple/device-management/tree/release/declarative/declarations
- Ensure the `Type` identifier matches a supported declaration type from the reference.
---
## References
- Fleet GitOps documentation: https://fleetdm.com/docs/configuration/yaml-files
- Fleet API documentation: https://fleetdm.com/docs/rest-api/rest-api
---
*This file will grow as new patterns and constraints are established.*

.pr_agent.toml (new file, 12 lines)

@ -0,0 +1,12 @@
# Configuration for Qodo code review tool
[github_app]
# Do not auto-run anything when a PR is opened
pr_commands = []
# Do not auto-run anything on new commits / pushes
handle_push_trigger = false
# Keep the review tool available for manual comment commands
[review_agent]
enabled = true
publish_output = true

@ -8,7 +8,7 @@ repos:
hooks:
- id: gitleaks
- repo: https://github.com/golangci/golangci-lint
rev: v2.7.1
rev: v2.11.3
hooks:
- id: golangci-lint
- repo: https://github.com/jumanjihouse/pre-commit-hooks

Some files were not shown because too many files have changed in this diff.