mirror of https://github.com/fleetdm/fleet (synced 2026-04-21 13:37:30 +00:00)
Initial .claude files checkin (#40451)
**Related issue:** Resolves #40450

## Details

This PR checks in a `.claude` folder with a main `CLAUDE.md` file, hooks, commands, agents, and settings useful for working with Fleet. Claude generated these itself based on some of the work I was doing with it:

* `CLAUDE.md`: contains basic information about the repo and project to give Claude needed context before working on Fleet code.
* `commands/project.md`: allows you to maintain memory across multiple related Claude sessions. For example, I use `/project renaming` whenever I'm working on the project to rename "teams" to "fleets", so that I don't have to explain every time what it is we're trying to accomplish. It keeps track of goals, what we've done, what's left, etc.
* `commands/fix-ci.md`: given a GitHub Actions run URL, it will find any failing tests, fix the broken ones, and report on any that look legitimate. Example: `/fix-ci https://github.com/fleetdm/fleet/actions/runs/22364613741/job/64727183666?pr=40414`
* Other commands: `test.md`, `find-related-tests.md`, `review-pr.md` -- I haven't used these; leaving them in for discussion. The `review-pr` one is interesting as it should utilize the `agents/go-reviewer.md` agent, which we can customize to do things like look at our patterns files.
* Settings + goimports hook: whenever Claude makes edits or creates files, run the formatter.
Parent: f555071a76
Commit: 6fc6e58d14

9 changed files with 258 additions and 0 deletions
.claude/CLAUDE.md (new file, 12 lines)
## Running Tests

```bash
# Quick Go tests (no external deps)
go test ./server/fleet/...

# Integration tests (need MySQL and/or Redis running)
MYSQL_TEST=1 go test ./server/datastore/mysql/...
MYSQL_TEST=1 REDIS_TEST=1 go test ./server/service/...

# Run a specific test
MYSQL_TEST=1 go test -run TestFunctionName ./server/datastore/mysql/...
```
.claude/agents/go-reviewer.md (new file, 41 lines)
# Go Code Reviewer for Fleet

You are a Go code reviewer specialized in the Fleet codebase. Review code changes with deep knowledge of Fleet's patterns and conventions.

## What you check

### Error handling
- Errors wrapped with `ctxerr.Wrap(ctx, err, "message")`, not `fmt.Errorf` or `pkg/errors`
- All errors from DB calls checked
- Proper error propagation (no swallowed errors)

### Database
- SQL injection prevention (parameterized queries only)
- Proper use of sqlx/goqu patterns
- New queries have appropriate indexes
- Migrations are reversible and tested
- `ds.writer(ctx)` vs `ds.reader(ctx)` used correctly for write/read operations

### API endpoints
- Auth checks present (middleware or explicit)
- Input validation at boundaries
- Proper HTTP status codes
- Response types match Fleet conventions

### Testing
- New code has corresponding tests
- Integration tests for DB-touching code
- Test helpers used correctly (CreateMySQLDS, etc.)
- Edge cases covered (nil, empty, large inputs)

### Logging
- Uses slog or level.X(logger) structured logging
- No print/println statements
- Sensitive data not logged

## Output format

Organize findings by severity:
1. **Blocking** — must fix before merge
2. **Important** — should fix, may cause issues
3. **Minor** — style/convention nits
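As a rough illustration of the error-handling check, the first rule could be mechanized outside the agent by scanning added diff lines for the discouraged wrappers. This is a hypothetical helper, not part of the checked-in files; the diff hunk below is fabricated for the example.

```shell
# Flag fmt.Errorf / errors.Wrap usage on added lines of a diff,
# since the convention above is ctxerr.Wrap(ctx, err, "message").
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
+  if err != nil {
+    return fmt.Errorf("save host: %w", err)
+  }
EOF

# Count added lines ("+" prefix) that use a discouraged wrapper.
hits=$(grep -cE '^\+.*(fmt\.Errorf|errors\.Wrap)' "$tmp")
echo "discouraged error wrappers on $hits added line(s)"
rm -f "$tmp"
```

In practice the agent performs this check by reading the diff directly, so a script like this is only useful as a pre-review lint.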
.claude/commands/find-related-tests.md (new file, 8 lines)
Look at my recent git changes (`git diff` and `git diff --cached`) and find all related test files.

For each modified file, find:

1. The `_test.go` file in the same package
2. Integration tests that exercise the modified code (check `server/service/integration_*_test.go` files)
3. Any test helpers or fixtures that may need updating

List the test files and suggest specific test functions to run with the exact `go test` commands, including the right env vars (MYSQL_TEST, REDIS_TEST, etc.).
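The first lookup (the `_test.go` sibling in the same package) follows Go's naming convention and can be sketched in shell; the file name here is hypothetical:

```shell
# Derive the conventional _test.go sibling path for a modified Go file.
modified="server/datastore/mysql/hosts.go"
test_file="${modified%.go}_test.go"
echo "$test_file"   # server/datastore/mysql/hosts_test.go
```

The other two lookups (integration tests and fixtures) need actual code search, which is why the command leaves them to the agent.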
.claude/commands/fix-ci.md (new file, 83 lines)
Fix failing tests from a CI run. The argument is a GitHub Actions run URL or run ID: $ARGUMENTS

## Step 1: Identify failing jobs

Extract the run ID from the URL (the numeric path segment after `/runs/`). Use `gh run view <run_id>` to list the jobs, then find the failing ones:

```
gh run view <run_id> --json jobs --jq '.jobs[] | select(.conclusion == "failure") | {name: .name, id: .databaseId}'
```

Group the failing jobs by **test suite** (the first parenthesized token in the job name, e.g. `integration-core`, `integration-enterprise`, `service`, `mysql`, `main`). You only need to examine **one job per unique suite** since the matrix variants (OS, MySQL version) run the same tests.
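The run-ID extraction in Step 1 is simple string surgery on the standard GitHub Actions URL shape; a minimal sketch, using the sample URL from the PR description:

```shell
# Pull the numeric run ID out of a GitHub Actions run URL.
# URLs look like .../actions/runs/<run_id>/job/<job_id>?pr=<n>
url="https://github.com/fleetdm/fleet/actions/runs/22364613741/job/64727183666?pr=40414"

# Keep only the digits immediately after "/runs/".
run_id=$(printf '%s\n' "$url" | sed -n 's#.*/runs/\([0-9]*\).*#\1#p')
echo "$run_id"   # 22364613741
```

The same value can then be passed straight to `gh run view "$run_id"`.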
## Step 2: Find the failing tests in each suite

For each unique suite, fetch the job log and find the `FAIL: ` lines. IMPORTANT: use `gh api` (not `gh run view --log`, which may return empty):

```
gh api repos/fleetdm/fleet/actions/jobs/<job_id>/logs 2>&1 | grep -e 'FAIL: ' | head -30
```

This gives you the failing test function names and subtests. Ignore the parent test if subtests are listed (e.g. if `TestFoo` and `TestFoo/Bar` both appear, focus on `TestFoo/Bar`).

## Step 3: Get error details

For each suite, fetch the error traces:

```
gh api repos/fleetdm/fleet/actions/jobs/<job_id>/logs 2>&1 | grep -e 'FAIL: \|Error Trace\|Error:\|expected:\|actual:' | head -60
```

This tells you the exact file/line and what the assertion expected vs. what it got.

## Step 4: Diagnose each failure

For each failing test, read the test code at the indicated file and line. Determine whether the failure is:

**A) A stale test assertion** — the test expects an old string/value but the production code was intentionally changed. The test needs updating to match the new behavior. Signs:
- The expected value is an old error message string and the actual value is a new one
- The change aligns with the intent of the current branch's modifications
- The production code change looks intentional

**B) A legitimate test failure** — the test is correct but the code under test is buggy. The production code needs fixing. Signs:
- The test's expected value matches the documented/intended behavior
- The actual value indicates a regression or bug
- The test is not related to any intentional change on this branch

## Step 5: Fix stale assertions (category A)

For each stale assertion:

1. Read the test file
2. Update the assertion to match the new expected value
3. Also search for **other assertions in the same file** that check similar strings — CI only catches the first failure per test, so there may be additional stale assertions that haven't failed yet. Use Grep to find them.
4. Also check for **related assertions in other test files** for the same error message pattern

## Step 6: Report legitimate failures (category B)

For each legitimate failure, report to the user:

- The test name and file location
- What the test expects vs. what it got
- Your analysis of why the production code is producing the wrong result
- The production code file/line that likely needs fixing

Do NOT fix production code bugs without user approval — only report them.

## Step 7: Verify fixes

After fixing stale assertions, run the affected tests locally to verify they pass:

- `pkg/spec/...` and `server/fleet/...`: `go test -run 'TestName' ./pkg/spec/...`
- `server/service/...` (unit tests like devices_test.go, scripts_test.go): `go test -run 'TestName' ./server/service/`
- `ee/server/service/...`: `go test -run 'TestName' ./ee/server/service/`
- `server/datastore/mysql/...`: `MYSQL_TEST=1 go test -run 'TestName' ./server/datastore/mysql/`
- Integration tests (`integration_core_test.go`, `integration_enterprise_test.go`, `integration_live_queries_test.go`): these require `MYSQL_TEST=1 REDIS_TEST=1` and take a long time, so just verify compilation with `go build ./...`

After running tests, also do a proactive Grep scan for any remaining old assertion strings in test files that might break in CI even though they didn't show up in this run (CI stops at the first failure per test function).

## Step 8: Report summary

Present a summary to the user:

- Total failing suites and tests found
- How many were stale assertions (fixed) vs. legitimate failures (reported)
- List of files modified
- Any remaining concerns or tests that couldn't be verified locally
.claude/commands/project.md (new file, 38 lines)
Read the project context file at `~/.fleet/claude-projects/$ARGUMENTS.md`. This contains background, decisions, and conventions for a specific workstream within Fleet.

Also check for a project-specific memory file named `$ARGUMENTS.md` in your auto memory directory (the persistent memory directory mentioned in your system instructions). If it exists, read it too — it contains things learned while working on this project in previous sessions.

If the project context file was found, give a brief summary of what you know and ask what we're working on today.

If the project context file doesn't exist:

1. Tell the user no project named "$ARGUMENTS" was found.
2. List any existing `.md` files in `~/.fleet/claude-projects/` so they can see what's available.
3. Ask if they'd like to initialize a new project with that name.
4. If they don't want to initialize, stop here.
5. If they do, ask them to brain-dump everything they know about the workstream — the goal, what areas of the codebase it touches, key decisions, gotchas, anything they've been repeating at the start of each session. A sentence is fine, a paragraph is better. Also offer: "I can also scan your recent session transcripts for relevant context — would you like me to look back through recent chats?"
6. If they want you to scan prior sessions, look at the JSONL transcript files in the Claude project directory (the same directory as your auto memory, but the `.jsonl` files). Read recent ones (the last 5-10), skimming for messages related to the workstream. These are large files, so read selectively — check the first few hundred lines of each to gauge relevance before reading more deeply.
7. Using their description, any prior session context, and codebase exploration, find relevant files, patterns, types, and existing implementations related to the workstream.
8. Create `~/.fleet/claude-projects/$ARGUMENTS.md` populated with what you found, using this structure:

   ```markdown
   # Project: $ARGUMENTS

   ## Background
   <!-- What is this workstream about, in the user's words + what you learned -->

   ## How It Works
   <!-- Key mechanisms, patterns, and code flow you discovered -->

   ## Key Files
   <!-- Important file paths for this workstream, with brief descriptions -->

   ## Key Decisions
   <!-- Important architectural or design decisions -->

   ## Status
   <!-- What's done, what remains -->
   ```

9. Show the user what you wrote and ask if they'd like to adjust anything before continuing.

As you work on a project, update the memory file (in your auto memory directory, named `$ARGUMENTS.md`) with useful discoveries — gotchas, important file paths, patterns — but not session-specific details.
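The first two steps of the missing-project fallback amount to a file-existence check plus a directory listing; a minimal sketch, using a temp directory and made-up project names in place of `~/.fleet/claude-projects`:

```shell
# If the requested project file is missing, list what's available.
proj_dir=$(mktemp -d)   # stand-in for ~/.fleet/claude-projects
touch "$proj_dir/renaming.md" "$proj_dir/mdm.md"

name="unknown-project"
if [ ! -f "$proj_dir/$name.md" ]; then
  echo "No project named \"$name\" found. Available projects:"
  # Strip the .md extension so only project names are shown.
  available=$(ls "$proj_dir" | sed -n 's/\.md$//p')
  echo "$available"
fi
rm -rf "$proj_dir"
```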
.claude/commands/review-pr.md (new file, 17 lines)
Review the pull request: $ARGUMENTS

Use `gh pr view` and `gh pr diff` to get the full context.

Review the changes focusing on:

1. **Correctness** — logic errors, edge cases, nil pointer risks
2. **Go idioms** — error handling with ctxerr, proper context usage, slog logging
3. **SQL safety** — injection risks, missing indexes for new queries, migration correctness
4. **Test coverage** — are new code paths tested? Are integration tests needed?
5. **Fleet conventions** — matches patterns in surrounding code

For each issue found, cite the specific file and line. Categorize findings as:

- **Must fix** — bugs, security issues, data loss risks
- **Should fix** — convention violations, missing error handling
- **Nit** — style preferences, minor improvements

Be concise. Don't comment on things that are fine.
.claude/commands/test.md (new file, 10 lines)
Run Go tests related to my recent changes. Look at `git diff` and `git diff --cached` to determine which packages were modified.

For each modified package, run the tests with appropriate env vars:

- If the package is under `server/datastore/mysql`: use `MYSQL_TEST=1`
- If the package is under `server/service`: use `MYSQL_TEST=1 REDIS_TEST=1`
- Otherwise: run without special env vars

If an argument is provided, use it as a `-run` filter: $ARGUMENTS

Show a summary of results: which packages passed, which failed, and any failure details.
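The env-var selection rules above reduce to a small case statement; a sketch (the helper name and sample package paths are made up for illustration):

```shell
# Choose test env vars based on the modified package path, per the rules above.
pick_env() {
  case "$1" in
    server/datastore/mysql*) echo "MYSQL_TEST=1" ;;
    server/service*)         echo "MYSQL_TEST=1 REDIS_TEST=1" ;;
    *)                       echo "" ;;
  esac
}

pick_env "server/datastore/mysql/hosts"   # MYSQL_TEST=1
pick_env "server/service/devices"         # MYSQL_TEST=1 REDIS_TEST=1
pick_env "pkg/spec"                       # (prints an empty line)
```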
.claude/hooks/goimports.sh (new executable file, 24 lines)
```sh
#!/bin/sh
# PostToolUse hook: run goimports on Go files after Edit/Write
# Receives tool event JSON on stdin

INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')

if [ -z "$FILE_PATH" ]; then
  exit 0
fi

case "$FILE_PATH" in
  *.go)
    if command -v goimports >/dev/null 2>&1; then
      goimports -w "$FILE_PATH" 2>/dev/null
    elif command -v gofumpt >/dev/null 2>&1; then
      gofumpt -w "$FILE_PATH" 2>/dev/null
    else
      gofmt -w "$FILE_PATH" 2>/dev/null
    fi
    ;;
esac

exit 0
```
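The hook's stdin parsing can be exercised without Claude by feeding it a hand-written event; the JSON below mimics the `.tool_input.file_path` shape the script reads (the exact event schema is an assumption, and the file path is invented). This only needs `jq`, which the hook itself already depends on.

```shell
# Simulate the hook's stdin parsing with a fake PostToolUse event.
INPUT='{"tool_name":"Edit","tool_input":{"file_path":"server/fleet/hosts.go"}}'
FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')
echo "$FILE_PATH"   # server/fleet/hosts.go

# Events without a file_path yield empty, so the hook exits early:
echo '{"tool_name":"Bash","tool_input":{"command":"ls"}}' |
  jq -r '.tool_input.file_path // empty'
```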
.claude/settings.json (new file, 25 lines)
```json
{
  "env": {
    "MYSQL_TEST": "1",
    "REDIS_TEST": "1"
  },
  "permissions": {
    "allow": [
      "Read(~/.fleet/**)"
    ]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/goimports.sh",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```