Release v0.4.0: /consistency-check, skill fixes, genre-agnostic agents

New skill: /consistency-check — cross-GDD entity registry scanner
New registries: design/registry/entities.yaml, docs/registry/architecture.yaml
Skill fixes: no-arg guards, verdict keywords, AskUserQuestion gates on all team-* skills
Agent fixes: genre-agnostic language in game-designer, systems-designer, economy-designer, live-ops-designer
Docs: skill/template counts corrected, stale references cleaned up

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Donchitos 2026-03-27 20:06:33 +11:00
parent 04ed5d5c36
commit 6c041ac1be
108 changed files with 2745 additions and 1005 deletions

View file

@@ -23,7 +23,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"
@@ -84,7 +84,7 @@ Before writing any code:
Examples:
- `game.level.started`
- `game.level.completed`
- `game.combat.enemy_killed`
- `game.[context].[action]`
- `ui.menu.settings_opened`
- `economy.currency.spent`
- `progression.milestone.reached`

View file

@@ -28,7 +28,7 @@ Before proposing any design:
2. **Present 2-4 options with reasoning:**
- Explain pros/cons for each option
- Reference game design theory (MDA, SDT, Bartle, etc.)
- Reference visual design theory (Gestalt principles, color theory, visual hierarchy, etc.)
- Align each option with the user's stated goals
- Make a recommendation, but explicitly defer the final decision to the user
@@ -96,10 +96,10 @@ plain text. Follow the **Explain -> Capture** pattern:
All assets must follow: `[category]_[name]_[variant]_[size].[ext]`
Examples:
- `env_tree_oak_large.png`
- `char_knight_idle_01.png`
- `env_[object]_[descriptor]_large.png`
- `char_[character]_idle_01.png`
- `ui_btn_primary_hover.png`
- `vfx_fire_loop_small.png`
- `vfx_[effect]_loop_small.png`
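The `[category]_[name]_[variant]_[size].[ext]` convention above lends itself to mechanical validation. A minimal sketch of such a checker, assuming an invented category and extension vocabulary (nothing here comes from the repo itself):

```python
import re

# Hypothetical validator for the asset naming convention
# [category]_[name]_[variant]_[size].[ext]. The allowed categories
# and extensions below are illustrative, not a project standard.
ASSET_NAME = re.compile(
    r"^(?P<category>env|char|ui|vfx)_"
    r"(?P<name>[a-z0-9]+)_"
    r"(?P<variant>[a-z0-9]+)_"
    r"(?P<size>[a-z0-9]+)"
    r"\.(?P<ext>png|wav|ogg)$"
)

def is_valid_asset_name(filename):
    """Return True if the filename matches the four-part convention."""
    return ASSET_NAME.match(filename) is not None
```

A hook or CI step could run this over new files and reject names such as `tree.png` that skip the category/variant/size parts.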
### What This Agent Must NOT Do

View file

@@ -23,7 +23,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -28,7 +28,7 @@ Before proposing any design:
2. **Present 2-4 options with reasoning:**
- Explain pros/cons for each option
- Reference game design theory (MDA, SDT, Bartle, etc.)
- Reference reward psychology and economics (variable ratio schedules, loss aversion, sink/faucet balance, inflation curves, etc.)
- Align each option with the user's stated goals
- Make a recommendation, but explicitly defer the final decision to the user
@@ -75,6 +75,46 @@ plain text. Follow the **Explain -> Capture** pattern:
- If running as a Task subagent, structure text so the orchestrator can present
options via `AskUserQuestion`
### Registry Awareness
Items, currencies, and loot entries defined here are cross-system facts —
they appear in combat GDDs, economy GDDs, and quest GDDs simultaneously.
Before authoring any item or loot table, check the entity registry:
```
Read path="design/registry/entities.yaml"
```
Use registered item values (gold value, weight, rarity) as your canonical
source. Never define an item value that contradicts a registered entry without
explicitly flagging it as a proposed registry change:
> "Item '[item_name]' is registered at [N] [unit]. I'm proposing [M] [unit] — shall I
> update the registry entry and notify any documents that reference it?"
After completing a loot table or resource flow model, flag all new cross-system
items for registration:
> "These items appear in multiple systems. May I add them to
> `design/registry/entities.yaml`?"
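The shape of `design/registry/entities.yaml` is not prescribed by this section; one illustrative entry, with placeholder field names and values, might look like:

```yaml
# Illustrative entities.yaml entry -- the field names and values here
# are assumptions for the example, not the project's actual schema.
items:
  iron_sword:
    gold_value: 50
    weight: 3.5
    rarity: common
    referenced_by:
      - design/gdd/combat.md
      - design/gdd/economy.md
```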
### Reward Output Format (When Applicable)
If the game includes reward tables, drop systems, unlock gates, or any
mechanic that distributes resources probabilistically or on condition —
document them with explicit rates, not vague descriptions. The format
adapts to the game's vocabulary (drops, unlocks, rewards, cards, outcomes):
1. **Output table** (markdown, using the game's terminology):
| Output | Frequency/Rate | Condition or Weight | Notes |
|--------|---------------|---------------------|-------|
| [item/reward/outcome] | [%/weight/count] | [condition] | [any constraint] |
2. **Expected acquisition** — how many attempts/sessions/actions on average to receive each output tier
3. **Floor/ceiling** — any guaranteed minimums or maximums that prevent extended dry streaks (only if the game has this mechanic)
If the game does not have probabilistic reward systems (e.g., a puzzle game or
a narrative game), skip this section entirely — it is not universally applicable.
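The "expected acquisition" line item above has a closed form when each attempt is an independent roll: the mean number of attempts to first success at rate `p` is `1/p`, and a pity timer that guarantees the drop by attempt `c` caps it at `(1 - (1-p)^c) / p`. A hedged sketch (the function name and signature are invented for illustration):

```python
def expected_attempts(drop_rate, pity_cap=None):
    """Mean attempts to obtain one drop at `drop_rate` per attempt.

    Without a pity timer this is the geometric-distribution mean 1/p.
    A pity timer guarantees the drop by attempt `pity_cap`, giving the
    exact truncated expectation (1 - (1-p)^cap) / p.
    """
    if not 0.0 < drop_rate <= 1.0:
        raise ValueError("drop_rate must be in (0, 1]")
    if pity_cap is None:
        return 1.0 / drop_rate
    return (1.0 - (1.0 - drop_rate) ** pity_cap) / drop_rate
```

For example, a 5% drop averages about 20 attempts; a 50% drop with a 2-attempt pity cap averages 1.5.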
### Key Responsibilities
1. **Resource Flow Modeling**: Map all resource sources (faucets) and sinks in
@@ -83,13 +123,13 @@ plain text. Follow the **Explain -> Capture** pattern:
2. **Loot Table Design**: Design loot tables with explicit drop rates, rarity
distributions, pity timers, and bad luck protection. Document expected
acquisition timelines for every item tier.
3. **Progression Curve Design**: Define XP curves, power curves, and unlock
3. **Progression Curve Design**: Define [progression resource] curves, power curves, and unlock
pacing. Model expected player power at each stage of the game.
4. **Reward Psychology**: Apply reward schedule theory (variable ratio, fixed
interval, etc.) to design satisfying reward patterns. Document the
psychological principle behind each reward structure.
5. **Economic Health Metrics**: Define metrics that indicate economic health
or problems: average gold per hour, item acquisition rate, resource
or problems: average [currency] per hour, item acquisition rate, resource
stockpile distributions.
### What This Agent Must NOT Do

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -176,7 +176,7 @@ Mastery (challenge, strategy), Achievement (completion, power), Immersion
Every numeric system exposes exactly three categories of knobs:
1. **Feel knobs**: affect moment-to-moment experience (attack speed, movement
speed, animation timing). These are tuned through playtesting intuition.
2. **Curve knobs**: affect progression shape (XP requirements, damage scaling,
2. **Curve knobs**: affect progression shape ([progression resource] requirements, [stat] scaling,
cost multipliers). These are tuned through mathematical modeling.
3. **Gate knobs**: affect pacing (level requirements, resource thresholds,
cooldown timers). These are tuned through session-length targets.
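The three knob categories could surface in a system's tuning file as explicit groups, which makes the tuning method for each value self-documenting. A sketch, with invented system and knob names:

```yaml
# Illustrative grouping of one system's tuning values by knob category.
# All names and numbers are placeholders, not project values.
movement_system:
  feel_knobs:        # tuned through playtesting intuition
    move_speed: 6.0
    turn_time_s: 0.12
  curve_knobs:       # tuned through mathematical modeling
    upgrade_cost_multiplier: 1.15
  gate_knobs:        # tuned against session-length targets
    unlock_level: 3
    dash_cooldown_s: 2.5
```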

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -28,7 +28,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -28,7 +28,7 @@ Before proposing any design:
2. **Present 2-4 options with reasoning:**
- Explain pros/cons for each option
- Reference game design theory (MDA, SDT, Bartle, etc.)
- Reference spatial and pacing theory (flow corridors, encounter density, sightlines, difficulty curves, etc.)
- Align each option with the user's stated goals
- Make a recommendation, but explicitly defer the final decision to the user

View file

@@ -101,8 +101,8 @@ plain text. Follow the **Explain → Capture** pattern:
- Free track must provide meaningful progression (never feel punishing)
- Premium track adds cosmetic and convenience rewards
- No gameplay-affecting items exclusively in premium track (avoids pay-to-win)
- XP curve: early levels fast (hook), mid levels steady, final levels require dedication
- Include catch-up mechanics for late joiners (XP boost in final weeks)
- [Progression] curve: early [tiers] fast (hook), mid [tiers] steady, final [tiers] require dedication
- Include catch-up mechanics for late joiners ([progression boost] in final weeks)
- Document reward tables with rarity distribution and perceived value
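The "early fast, mid steady, final dedication" shape described above can be sketched as a piecewise tier-cost curve. All numbers here are illustrative placeholders, not tuning values from any game:

```python
def tier_cost(tier, total_tiers=50, base=100):
    """Cost to clear `tier`: cheap early (hook), linear mid, steep late.

    Hypothetical shape only -- real values come from the game's tuning.
    """
    early = int(total_tiers * 0.2)   # first 20% of tiers
    late = int(total_tiers * 0.8)    # last 20% of tiers start here
    if tier <= early:
        return base                  # flat, low cost: fast early progress
    if tier <= late:
        return base * 2 + (tier - early) * 10   # steady linear mid-game
    # final stretch ramps hard so completion requires dedication
    return base * 5 + (tier - late) * 50
```

Plotting cost per tier (or cumulative cost) makes it easy to check that the curve never spikes so hard that free-track players feel punished.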
### Event Design

View file

@@ -27,7 +27,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -26,7 +26,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -26,7 +26,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -30,7 +30,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -27,7 +27,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -27,7 +27,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -26,7 +26,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -28,7 +28,7 @@ Before proposing any design:
2. **Present 2-4 options with reasoning:**
- Explain pros/cons for each option
- Reference game design theory (MDA, SDT, Bartle, etc.)
- Reference systems design theory (feedback loops, emergent complexity, simulation design, balancing levers, etc.)
- Align each option with the user's stated goals
- Make a recommendation, but explicitly defer the final decision to the user
@@ -75,11 +75,49 @@ plain text. Follow the **Explain -> Capture** pattern:
- If running as a Task subagent, structure text so the orchestrator can present
options via `AskUserQuestion`
### Registry Awareness
Before designing any formula, entity, or mechanic that will be referenced
across multiple systems, check the entity registry:
```
Read path="design/registry/entities.yaml"
```
If the registry exists and has relevant entries, use the registered values as
your starting point. Never define a value for a registered entity that differs
from the registry without explicitly proposing a registry update to the user.
If you introduce a new cross-system entity (one that will appear in more than
one GDD), flag it at the end of each authoring session:
> "These new entities/items/formulas are cross-system facts. May I add them to
> `design/registry/entities.yaml`?"
### Formula Output Format (Mandatory)
Every formula you produce MUST include all of the following. Prose descriptions
without a variable table are insufficient and must be expanded before approval:
1. **Named expression** — a symbolic equation using clearly named variables
2. **Variable table** (markdown):
| Symbol | Type | Range | Description |
|--------|------|-------|-------------|
| [var_a] | [int/float/bool] | [min-max or set] | [what this variable represents] |
| [var_b] | [int/float/bool] | [min-max or set] | [what this variable represents] |
| [result] | [int/float] | [min-max or unbounded] | [what the output represents] |
3. **Output range** — whether the result is clamped, bounded, or unbounded, and why
4. **Worked example** — concrete placeholder values showing the formula in action
The variables, their names, and their ranges are determined by the specific system
being designed — not assumed from genre conventions.
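A formula that satisfies all four requirements might look like the following. Every name, range, and number is an invented placeholder used only to demonstrate the format:

```python
# Named expression (hypothetical):
#   effective_output = clamp(base * (1 + scale * level), 0, cap)
#
# Variable table (illustrative):
#   base  : float, > 0    -- unscaled output of the system
#   scale : float, [0, 1] -- per-level multiplier
#   level : int,  >= 0    -- progression stage
#   cap   : float, > 0    -- hard ceiling
#
# Output range: clamped to [0, cap] so late-game values cannot run away.
def effective_output(base, scale, level, cap):
    raw = base * (1.0 + scale * level)
    return max(0.0, min(raw, cap))

# Worked example: base=10, scale=0.25, level=4, cap=30
#   10 * (1 + 0.25 * 4) = 10 * 2.0 = 20.0 (under the cap, so unchanged)
```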
### Key Responsibilities
1. **Formula Design**: Create mathematical formulas for damage, healing, XP
curves, drop rates, crafting success, and all numeric systems. Every formula
must include variable definitions, expected ranges, and graph descriptions.
1. **Formula Design**: Create mathematical formulas for [output], [recovery], [progression resource]
curves, drop rates, production success, and all numeric systems. Every formula
must include named expression, variable table, output range, and worked example.
2. **Interaction Matrices**: For systems with many interacting elements (e.g.,
elemental damage, status effects, faction relationships), create explicit
interaction matrices showing every combination.
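An interaction matrix in this sense enumerates every ordered pair explicitly, so no combination is left implicit. A minimal sketch with invented elements and outcomes:

```python
# Hypothetical 3x3 elemental interaction matrix -- every (source, target)
# pair is listed, including the "no effect" diagonal, so coverage is total.
INTERACTIONS = {
    ("fire", "ice"): "extinguish",
    ("fire", "oil"): "ignite",
    ("fire", "fire"): "no_effect",
    ("ice", "fire"): "extinguish",
    ("ice", "oil"): "slick",
    ("ice", "ice"): "no_effect",
    ("oil", "fire"): "ignite",
    ("oil", "ice"): "slick",
    ("oil", "oil"): "no_effect",
}

def interact(source, target):
    """Look up the outcome; a KeyError means the matrix has a gap."""
    return INTERACTIONS[(source, target)]
```

A completeness check (asserting the dict has exactly `n * n` entries for `n` elements) is a cheap way to catch gaps when new elements are added.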

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -23,7 +23,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -25,7 +25,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -22,7 +22,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -28,7 +28,7 @@ Before proposing any design:
2. **Present 2-4 options with reasoning:**
- Explain pros/cons for each option
- Reference game design theory (MDA, SDT, Bartle, etc.)
- Reference UX theory (affordances, mental models, Fitts's Law, progressive disclosure, etc.)
- Align each option with the user's stated goals
- Make a recommendation, but explicitly defer the final decision to the user
@@ -73,7 +73,7 @@ plain text. Follow the **Explain -> Capture** pattern:
### Key Responsibilities
1. **User Flow Mapping**: Document every user flow in the game -- from boot to
gameplay, from menu to combat, from death to retry. Identify friction
gameplay, from menu to play, from failure to retry. Identify friction
points and optimize.
2. **Interaction Design**: Design interaction patterns for all input methods
(keyboard/mouse, gamepad, touch). Define button assignments, contextual

View file

@@ -27,7 +27,7 @@ Before writing any code:
2. **Ask architecture questions:**
- "Should this be a static utility class or a scene node?"
- "Where should [data] live? (CharacterStats? Equipment class? Config file?)"
- "Where should [data] live? ([SystemData]? [Container] class? Config file?)"
- "The design doc doesn't specify [edge case]. What should happen when...?"
- "This will require changes to [other system]. Should I coordinate with that first?"

View file

@@ -109,6 +109,7 @@ Ask yourself: "What department would handle this in a real studio?"
| `/perf-profile` | Performance profiling and bottleneck ID |
| `/tech-debt` | Scan, track, and prioritize tech debt |
| `/gate-check` | Validate phase readiness (PASS/CONCERNS/FAIL) |
| `/consistency-check` | Scan all GDDs for cross-document inconsistencies (conflicting stats, names, rules) |
| `/reverse-document` | Generate design/architecture docs from existing code |
| `/milestone-review` | Reviews milestone progress |
| `/retrospective` | Runs sprint/milestone retrospective |
@@ -129,6 +130,18 @@ Ask yourself: "What department would handle this in a real studio?"
| `/team-polish` | Orchestrate full polish team pipeline |
| `/team-audio` | Orchestrate full audio team pipeline |
| `/team-level` | Orchestrate full level creation pipeline |
| `/team-live-ops` | Orchestrate live-ops team for seasons, events, and post-launch content |
| `/team-qa` | Orchestrate full QA team cycle — test plan, test cases, smoke check, sign-off |
| `/qa-plan` | Generate a QA test plan for a sprint or feature |
| `/bug-triage` | Re-prioritize open bugs, assign to sprints, surface systemic trends |
| `/smoke-check` | Run critical path smoke test gate before QA hand-off (PASS/FAIL) |
| `/soak-test` | Generate a soak test protocol for extended play sessions |
| `/regression-suite` | Map coverage to GDD critical paths, flag gaps, maintain regression suite |
| `/test-setup` | Scaffold test framework + CI pipeline for the project's engine (run once) |
| `/test-helpers` | Generate engine-specific test helper libraries and factory functions |
| `/test-flakiness` | Detect flaky tests from CI history, flag for quarantine or fix |
| `/test-evidence-review` | Quality review of test files and manual evidence — ADEQUATE/INCOMPLETE/MISSING |
| `/skill-test` | Validate skill files for compliance and correctness (static / spec / audit) |
### 4. Use Templates for New Documents
@@ -167,6 +180,7 @@ Templates are in `.claude/docs/templates/`:
- `interaction-pattern-library.md` -- for standard UI controls and game-specific patterns
- `player-journey.md` -- for 6-phase emotional arc and retention hooks by time scale
- `difficulty-curve.md` -- for difficulty axes, onboarding ramp, and cross-system interactions
- `test-evidence.md` -- template for recording manual test evidence (screenshots, walkthrough notes)
Also in `.claude/docs/templates/collaborative-protocols/` (used by agents, not typically edited directly):
@@ -253,7 +267,7 @@ CLAUDE.md -- Master config (read this first, ~60 lines)
.claude/
settings.json -- Claude Code hooks and project settings
agents/ -- 48 agent definitions (YAML frontmatter)
skills/ -- 66 slash command definitions (YAML frontmatter)
skills/ -- 68 slash command definitions (YAML frontmatter)
hooks/ -- 12 hook scripts (.sh) wired by settings.json
rules/ -- 11 path-specific rule files
docs/
@@ -266,5 +280,5 @@ CLAUDE.md -- Master config (read this first, ~60 lines)
workflow-catalog.yaml -- 7-phase pipeline definition (read by /help)
setup-requirements.md -- System prerequisites (Git Bash, jq, Python)
settings-local-template.md -- Personal settings.local.json guide
templates/ -- 35 document templates
templates/ -- 37 document templates
```

View file

@@ -1,6 +1,6 @@
# Available Skills (Slash Commands)
66 slash commands organized by phase. Type `/` in Claude Code to access any of them.
68 slash commands organized by phase. Type `/` in Claude Code to access any of them.
## Onboarding & Navigation
@@ -65,6 +65,7 @@
| `/perf-profile` | Structured performance profiling with bottleneck identification |
| `/tech-debt` | Scan, track, prioritize, and report on technical debt |
| `/gate-check` | Validate readiness to advance between development phases (PASS/CONCERNS/FAIL) |
| `/consistency-check` | Scan all GDDs against the entity registry to detect cross-document inconsistencies (stats, names, rules that contradict each other) |
## QA & Testing

View file

@@ -53,6 +53,14 @@ Enter **retrofit mode**:
If NOT in retrofit mode, proceed to Step 0 below (normal ADR authoring).
**No-argument guard**: If no argument was provided (title is empty), ask before
running Step 0:
> "What technical decision are you documenting? Please provide a short title
> (e.g., `event-system-architecture`, `physics-engine-choice`)."
Use the user's response as the title, then proceed to Step 0.
---
## 0. Load Engine Context (ALWAYS FIRST)
@@ -109,6 +117,45 @@ Scan `docs/architecture/` for existing ADRs to find the next number.
Read related code, existing ADRs, and relevant GDDs from `design/gdd/`.
### 2a: Architecture Registry Check (BLOCKING gate)
Read `docs/registry/architecture.yaml`. Extract entries relevant to this ADR's
domain and decision (grep by system name, domain keyword, or state being touched).
Present any relevant stances to the user **before** the collaborative design
begins, as locked constraints:
```
## Existing Architectural Stances (must not contradict)
State Ownership:
player_health → owned by health-system (ADR-0001)
Interface: HealthComponent.current_health (read-only float)
→ If this ADR reads or writes player health, it must use this interface.
Interface Contracts:
damage_delivery → signal pattern (ADR-0003)
Signal: damage_dealt(amount, target, is_crit)
→ If this ADR delivers or receives damage events, it must use this signal.
Forbidden Patterns:
✗ autoload_singleton_coupling (ADR-0001)
✗ direct_cross_system_state_write (ADR-0000)
→ The proposed approach must not use these patterns.
```
If the user's proposed decision would contradict any registered stance, surface
the conflict immediately:
> "⚠️ Conflict: This ADR proposes [X], but ADR-[NNNN] established that [Y] is
> the accepted pattern for this purpose. Proceeding without resolving this will
> produce contradictory ADRs and inconsistent stories.
> Options: (1) Align with the existing stance, (2) Supersede ADR-[NNNN] with
> an explicit replacement, (3) Explain why this case is an exception."
Do not proceed to Step 3 (collaborative design) until any conflict is resolved
or explicitly accepted as an intentional exception.
---
## 3. Guide the decision collaboratively
@ -229,6 +276,12 @@ to implement it.]
- [Things that could go wrong]
- [Mitigation for each risk]
## GDD Requirements Addressed
| GDD System | Requirement | How This ADR Addresses It |
|------------|-------------|--------------------------|
| [system-name].md | [specific rule, formula, or performance constraint from that GDD] | [how this decision satisfies it] |
## Performance Implications
- **CPU**: [Expected impact]
- **Memory**: [Expected impact]
@ -256,4 +309,33 @@ to implement it.]
- If the specialist identifies a **blocking issue** (wrong API, deprecated approach, engine version incompatibility): revise the Decision and Engine Compatibility sections accordingly, then confirm the changes with the user before proceeding
- If the specialist finds **minor notes** only: incorporate them into the ADR's Risks subsection
5. **Save the ADR** to `docs/architecture/adr-[NNNN]-[slug].md`.
5. Ask: "May I write this ADR to `docs/architecture/adr-[NNNN]-[slug].md`?"
If yes, write the file, creating the directory if needed.
6. **Update Architecture Registry**
Scan the written ADR for new architectural stances that should be registered:
- State it claims ownership of
- Interface contracts it defines (signal signatures, method APIs)
- Performance budget it claims
- API choices it makes explicitly
- Patterns it bans (Consequences → Negative or explicit "do not use X")
Present candidates:
```
Registry candidates from this ADR:
NEW state ownership: player_stamina → stamina-system
NEW interface contract: stamina_depleted signal
NEW performance budget: stamina-system: 0.5ms/frame
NEW forbidden pattern: polling stamina each frame (use signal instead)
EXISTING (referenced_by update only): player_health → already registered ✅
```
Ask: "May I update `docs/registry/architecture.yaml` with these [N] new stances?"
If yes: append new entries. Never modify existing entries — if a stance is
changing, set the old entry to `status: superseded_by: ADR-[NNNN]` and add
the new entry.
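The supersession rule above can be sketched as a registry fragment. This is an illustrative shape only; the exact field names (`status`, `superseded_by`, `owner`) are assumptions about how `docs/registry/architecture.yaml` is laid out, and the shorthand `status: superseded_by: ADR-[NNNN]` is expanded here into valid YAML:

```yaml
# docs/registry/architecture.yaml -- illustrative sketch, field names assumed
state_ownership:
  - name: player_health
    owner: health-system
    adr: ADR-0001
    status: superseded        # old entry kept, never modified in place
    superseded_by: ADR-0007
  - name: player_health       # replacement entry appended below the old one
    owner: vitals-system
    adr: ADR-0007
```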
**Next Steps:** Run `/architecture-review` to validate coverage after the ADR is saved. Update any stories that were `Status: Blocked` pending this ADR to `Status: Ready`.

View file

@ -73,6 +73,11 @@ Read all inputs appropriate to the mode:
Report a count: "Loaded [N] GDDs, [M] ADRs, engine: [name + version]."
**Also read `docs/consistency-failures.md`** if it exists. Extract entries with
Domain matching the systems under review (Architecture, Engine, or any GDD domain
being covered). Surface recurring patterns as a "Known conflict-prone areas" note
at the top of the Phase 4 conflict detection output.
---
## Phase 2: Extract Technical Requirements from Every GDD
@ -530,6 +535,24 @@ If yes:
This ensures all future story files can reference stable TR-IDs that persist
across every subsequent architecture review.
### Reflexion Log Update
After writing the review report, append any 🔴 CONFLICT entries found in Phase 4
to `docs/consistency-failures.md` (if the file exists):
```markdown
### [YYYY-MM-DD] — /architecture-review — 🔴 CONFLICT
**Domain**: Architecture / [specific domain e.g. State Ownership, Performance]
**Documents involved**: [ADR-NNNN] vs [ADR-MMMM]
**What happened**: [specific conflict — what each ADR claims]
**Resolution**: [how it was or should be resolved]
**Pattern**: [generalised lesson for future ADR authors in this domain]
```
Only append CONFLICT entries — do not log GAP entries (missing ADRs are expected
before the architecture is complete). Do not create the file if missing — only
append when it already exists.
### Session State Update
After writing all approved files, silently append to

View file

@ -8,37 +8,43 @@ context: fork
agent: Explore
---
When this skill is invoked:
## Phase 1: Read Standards
1. **Read the art bible or asset standards** from the relevant design docs and
the CLAUDE.md naming conventions.
Read the art bible or asset standards from the relevant design docs and the CLAUDE.md naming conventions.
2. **Scan the target asset directory** using Glob:
- `assets/art/**/*` for art assets
- `assets/audio/**/*` for audio assets
- `assets/vfx/**/*` for VFX assets
- `assets/shaders/**/*` for shaders
- `assets/data/**/*` for data files
---
3. **Check naming conventions**:
- Art: `[category]_[name]_[variant]_[size].[ext]`
- Audio: `[category]_[context]_[name]_[variant].[ext]`
- All files must be lowercase with underscores
## Phase 2: Scan Asset Directories
4. **Check file standards**:
- Textures: Power-of-two dimensions, correct format (PNG for UI, compressed
for 3D), within size budget
- Audio: Correct sample rate, format (OGG for SFX, OGG/MP3 for music),
within duration limits
- Data: Valid JSON/YAML, schema-compliant
Scan the target asset directory using Glob:
5. **Check for orphaned assets** by searching code for references to each
asset file.
- `assets/art/**/*` for art assets
- `assets/audio/**/*` for audio assets
- `assets/vfx/**/*` for VFX assets
- `assets/shaders/**/*` for shaders
- `assets/data/**/*` for data files
6. **Check for missing assets** by searching code for asset references and
verifying the files exist.
---
7. **Output the audit**:
## Phase 3: Run Compliance Checks
**Naming conventions:**
- Art: `[category]_[name]_[variant]_[size].[ext]`
- Audio: `[category]_[context]_[name]_[variant].[ext]`
- All files must be lowercase with underscores
**File standards:**
- Textures: Power-of-two dimensions, correct format (PNG for UI, compressed for 3D), within size budget
- Audio: Correct sample rate, format (OGG for SFX, OGG/MP3 for music), within duration limits
- Data: Valid JSON/YAML, schema-compliant
**Orphaned assets:** Search code for references to each asset file. Flag any with no references.
**Missing assets:** Search code for asset references and verify the files exist.
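The naming and texture checks above can be sketched in Python. The exact art-name pattern is an assumption derived from the convention listed in Phase 3, not a project constant:

```python
import re

# Art naming convention from Phase 3: lowercase with underscores,
# [category]_[name]_[variant]_[size].[ext] -- pattern is assumed.
ART_NAME = re.compile(r"^[a-z0-9]+(_[a-z0-9]+){3}\.[a-z0-9]+$")

def is_power_of_two(n):
    """True if n is a positive power of two (texture dimension check)."""
    return n > 0 and (n & (n - 1)) == 0

def check_art_name(filename):
    """Check an art asset filename against the naming convention."""
    return bool(ART_NAME.match(filename))
```

In practice the filenames from the Glob scan in Phase 2 would be fed through checks like these, with violations collected into the Phase 4 report.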
---
## Phase 4: Output Audit Report
```markdown
# Asset Audit Report -- [Category] -- [Date]
@ -74,4 +80,16 @@ When this skill is invoked:
## Recommendations
[Prioritized list of fixes]
## Verdict: [COMPLIANT / WARNINGS / NON-COMPLIANT]
```
This skill is read-only — it produces a report but does not write files.
---
## Phase 5: Next Steps
- Fix naming violations using the patterns defined in CLAUDE.md.
- Delete confirmed orphaned assets after manual review.
- Run `/content-audit` to cross-check asset counts against GDD-specified requirements.

View file

@ -193,7 +193,9 @@ Ground the concept in reality:
brainstorm conversation, including the MDA analysis, player motivation
profile, and flow state design sections.
5. **Save to** `design/gdd/game-concept.md`, creating directories as needed.
5. Ask: "May I write the game concept document to `design/gdd/game-concept.md`?"
If yes, generate the document using the template at `.claude/docs/templates/game-concept.md`, fill in ALL sections from the brainstorm conversation, and write the file, creating directories as needed.
6. **Suggest next steps** (in this order — this is the professional studio
pre-production pipeline):
@ -208,3 +210,5 @@ Ground the concept in reality:
7. **Output a summary** with the chosen concept's elevator pitch, pillars,
primary player type, engine recommendation, biggest risk, and file path.
Verdict: **COMPLETE** — game concept created and handed off for next steps.

View file

@ -1,19 +1,29 @@
---
name: bug-report
description: "Creates a structured bug report from a description, or analyzes code to identify potential bugs. Ensures every bug report has full reproduction steps, severity assessment, and context."
argument-hint: "[description]
/bug-report analyze [path-to-file]"
argument-hint: "[description] | analyze [path-to-file]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write
---
When invoked with a description:
## Phase 1: Parse Arguments
1. **Parse the description** for key information.
Determine the mode from the argument:
2. **Search the codebase** for related files using Grep/Glob to add context.
- No `analyze` keyword → **Description Mode**: generate a structured bug report from the provided description
- `analyze [path]`**Analyze Mode**: read the target file(s) and identify potential bugs
3. **Generate the bug report**:
If no argument is provided, ask the user for a bug description before proceeding.
---
## Phase 2A: Description Mode
1. **Parse the description** for key information: what broke, when, how to reproduce it, and what the expected behavior is.
2. **Search the codebase** for related files using Grep/Glob to add context (affected system, likely files).
3. **Draft the bug report**:
```markdown
# Bug Report
@ -65,11 +75,33 @@ When invoked with a description:
[Any additional context or observations]
```
When invoked with `analyze`:
---
1. **Read the target file(s)**.
2. **Identify potential bugs**: null references, off-by-one errors, race
conditions, unhandled edge cases, resource leaks, incorrect state
transitions.
3. **For each potential bug**, generate a bug report with the likely trigger
scenario and recommended fix.
## Phase 2B: Analyze Mode
1. **Read the target file(s)** specified in the argument.
2. **Identify potential bugs**: null references, off-by-one errors, race conditions, unhandled edge cases, resource leaks, incorrect state transitions.
3. **For each potential bug**, generate a bug report using the template above, with the likely trigger scenario and recommended fix filled in.
---
## Phase 3: Save Report
Present the completed bug report(s) to the user.
Ask: "May I write this to `production/qa/bugs/BUG-[NNNN].md`?"
If yes, write the file, creating the directory if needed. Verdict: **COMPLETE** — bug report filed.
If no, stop here. Verdict: **BLOCKED** — user declined write.
---
## Phase 4: Next Steps
After saving, suggest:
- Run `/bug-triage` to prioritize this bug alongside existing open bugs.
- If S1 or S2 severity, consider `/hotfix` for an emergency fix workflow.

View file

@ -226,7 +226,9 @@ After writing:
can be considered healthy. Run `/sprint-status` to see current capacity."
- If regression bugs exist: "Regressions found — consider re-opening the
affected stories in sprint tracking and running `/smoke-check` to re-gate."
- If no P1 bugs exist: "No P1 bugs — build is in good shape for QA hand-off."
- If no P1 bugs exist: "No P1 bugs — build is in good shape for QA hand-off." Verdict: **COMPLETE** — triage report written.
If user declined write: Verdict: **BLOCKED** — user declined write.
---

View file

@ -10,38 +10,43 @@ context: |
model: haiku
---
When this skill is invoked:
## Phase 1: Parse Arguments
1. **Read the argument** for the target version or sprint number. If a version
is given, use the corresponding git tag. If a sprint number is given, use
the sprint date range.
Read the argument for the target version or sprint number. If a version is given, use the corresponding git tag. If a sprint number is given, use the sprint date range.
1b. **Check git availability** — Verify the repository is initialized:
- Run `git rev-parse --is-inside-work-tree` to confirm git is available
- If not a git repo, inform the user and abort gracefully
Verify the repository is initialized: run `git rev-parse --is-inside-work-tree` to confirm git is available. If not a git repo, inform the user and abort gracefully.
2. **Read the git log** since the last tag or release:
```
git log --oneline [last-tag]..HEAD
```
If no tags exist, read the full log or a reasonable recent range (last 100
commits).
---
3. **Read sprint reports** from `production/sprints/` for the relevant period
to understand planned work and context behind changes.
## Phase 2: Gather Change Data
4. **Read completed design documents** from `design/gdd/` for any new features
that were implemented during this period.
Read the git log since the last tag or release:
5. **Categorize every change** into one of these categories:
- **New Features**: Entirely new gameplay systems, modes, or content
- **Improvements**: Enhancements to existing features, UX improvements,
performance gains
- **Bug Fixes**: Corrections to broken behavior
- **Balance Changes**: Tuning of gameplay values, difficulty, economy
- **Known Issues**: Issues the team is aware of but have not yet resolved
```
git log --oneline [last-tag]..HEAD
```
6. **Generate the INTERNAL changelog** (full technical detail):
If no tags exist, read the full log or a reasonable recent range (last 100 commits).
Read sprint reports from `production/sprints/` for the relevant period to understand planned work and context behind changes.
Read completed design documents from `design/gdd/` for any new features implemented during this period.
---
## Phase 3: Categorize Changes
Categorize every change into one of these categories:
- **New Features**: Entirely new gameplay systems, modes, or content
- **Improvements**: Enhancements to existing features, UX improvements, performance gains
- **Bug Fixes**: Corrections to broken behavior
- **Balance Changes**: Tuning of gameplay values, difficulty, economy
- **Known Issues**: Issues the team is aware of but have not yet resolved
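A keyword-based first pass over commit subjects can seed the categorization. The prefixes below are assumptions (conventional-commit style plus a few balance terms); any commit they miss still needs the manual sprint-report check described in the guidelines:

```python
# First-pass categorizer for Phase 3; keyword list is an assumption.
CATEGORY_KEYWORDS = {
    "New Features": ("feat", "add"),
    "Bug Fixes": ("fix", "bug"),
    "Balance Changes": ("balance", "tune", "nerf", "buff"),
    "Improvements": ("improve", "perf", "polish", "refactor"),
}

def categorize(subject):
    """Map a one-line commit subject to a changelog category."""
    lowered = subject.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(lowered.startswith(k) or f" {k}" in lowered for k in keywords):
            return category
    return "Uncategorized"  # check sprint reports / changed files for context
```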
---
## Phase 4: Generate Internal Changelog
```markdown
# Internal Changelog: [Version]
@ -84,7 +89,9 @@ Commits: [Count] ([first-hash]..[last-hash])
- Lines removed: [N]
```
7. **Generate the PLAYER-FACING changelog** (friendly, non-technical):
---
## Phase 5: Generate Player-Facing Changelog
```markdown
# What's New in [Version]
@ -116,19 +123,28 @@ Thank you for playing! Your feedback helps us make the game better.
Report issues at [link].
```
8. **Output both changelogs** to the user. The internal changelog is the
primary working document. The player-facing changelog is ready for
community posting after review.
---
## Phase 6: Output
Output both changelogs to the user. The internal changelog is the primary working document. The player-facing changelog is ready for community posting after review.
This skill is read-only — it outputs to conversation but does not write files. To save the output, copy it manually or use `/patch-notes` which includes a save step.
Verdict: **COMPLETE** — changelog generated.
---
## Phase 7: Next Steps
- Use `/patch-notes [version]` to generate a styled, saved version for public release.
- Use `/release-checklist` before publishing the changelog externally.
### Guidelines
- Never expose internal code references, file paths, or developer names in
the player-facing changelog
- Never expose internal code references, file paths, or developer names in the player-facing changelog
- Group related changes together rather than listing individual commits
- If a commit message is unclear, check the associated files and sprint data
for context
- Balance changes should always include the design reasoning, not just the
numbers
- Known issues should be honest -- players appreciate transparency
- If the git history is messy (merge commits, reverts, fixup commits), clean
up the narrative rather than listing every commit literally
- If a commit message is unclear, check the associated files and sprint data for context
- Balance changes should always include the design reasoning, not just the numbers
- Known issues should be honest — players appreciate transparency
- If the git history is messy (merge commits, reverts, fixup commits), clean up the narrative rather than listing every commit literally

View file

@ -8,96 +8,96 @@ context: fork
agent: lead-programmer
---
When this skill is invoked:
## Phase 1: Load Target Files
1. **Read the target file(s)** in full.
Read the target file(s) in full. Read CLAUDE.md for project coding standards.
2. **Read the CLAUDE.md** for project coding standards.
---
2.5. **Identify the active engine specialists** by reading `.claude/docs/technical-preferences.md`, section `## Engine Specialists`. Note:
- The **Primary** specialist (used for architecture and broad engine concerns)
- The **Language/Code Specialist** (used when reviewing the project's primary language files)
- The **Shader Specialist** (used when reviewing shader files)
- The **UI Specialist** (used when reviewing UI code)
- If the section reads `[TO BE CONFIGURED]`, no engine is pinned — skip engine specialist steps below.
## Phase 2: Identify Engine Specialists
3. **ADR Compliance Check**:
Read `.claude/docs/technical-preferences.md`, section `## Engine Specialists`. Note:
a. Search for ADR references in: the story file associated with this work (if
provided), any commit message context, and header comments in the files being
reviewed. Look for patterns like `ADR-NNN`, `ADR-[name]`, or
`docs/architecture/ADR-`.
- The **Primary** specialist (used for architecture and broad engine concerns)
- The **Language/Code Specialist** (used when reviewing the project's primary language files)
- The **Shader Specialist** (used when reviewing shader files)
- The **UI Specialist** (used when reviewing UI code)
b. If no ADR references are found, note:
> "No ADR references found — skipping ADR compliance check."
Then proceed to step 4.
If the section reads `[TO BE CONFIGURED]`, no engine is pinned — skip engine specialist steps.
c. For each referenced ADR: read `docs/architecture/ADR-NNN-*.md` and extract
the **Decision** and **Consequences** sections.
---
d. Check the implementation against each ADR:
- What pattern/approach was chosen in the Decision?
- Are there alternatives explicitly rejected in the ADR?
- Are there required guardrails or constraints in the Consequences?
## Phase 3: ADR Compliance Check
e. Classify any deviation found:
- **ARCHITECTURAL VIOLATION** (BLOCKING): Implementation uses a pattern
explicitly rejected in the ADR (e.g., ADR rejected singletons for game
state, but the code uses a singleton).
- **ADR DRIFT** (WARNING): Implementation diverges meaningfully from the
chosen approach without using an explicitly forbidden pattern (e.g., ADR
chose event-based communication but code uses direct method calls).
- **MINOR DEVIATION** (INFO): Small difference from ADR guidance that does
not affect the overall architecture (e.g., slightly different naming from
the ADR's example code).
Search for ADR references in the story file, commit messages, and header comments. Look for patterns like `ADR-NNN` or `docs/architecture/ADR-`.
f. Include ADR compliance findings in the review output under
`### ADR Compliance` before the Standards Compliance section.
If no ADR references are found, note: "No ADR references found — skipping ADR compliance check."
4. **Identify the system category** (engine, gameplay, AI, networking, UI, tools)
and apply category-specific standards.
For each referenced ADR: read the file, extract the **Decision** and **Consequences** sections, then classify any deviation:
5. **Evaluate against coding standards**:
- [ ] Public methods and classes have doc comments
- [ ] Cyclomatic complexity under 10 per method
- [ ] No method exceeds 40 lines (excluding data declarations)
- [ ] Dependencies are injected (no static singletons for game state)
- [ ] Configuration values loaded from data files
- [ ] Systems expose interfaces (not concrete class dependencies)
- **ARCHITECTURAL VIOLATION** (BLOCKING): Uses a pattern explicitly rejected in the ADR
- **ADR DRIFT** (WARNING): Meaningfully diverges from the chosen approach without using a forbidden pattern
- **MINOR DEVIATION** (INFO): Small difference from ADR guidance that doesn't affect overall architecture
6. **Check architectural compliance**:
- [ ] Correct dependency direction (engine <- gameplay, not reverse)
- [ ] No circular dependencies between modules
- [ ] Proper layer separation (UI does not own game state)
- [ ] Events/signals used for cross-system communication
- [ ] Consistent with established patterns in the codebase
---
7. **Check SOLID compliance**:
- [ ] Single Responsibility: Each class has one reason to change
- [ ] Open/Closed: Extendable without modification
- [ ] Liskov Substitution: Subtypes substitutable for base types
- [ ] Interface Segregation: No fat interfaces
- [ ] Dependency Inversion: Depends on abstractions, not concretions
## Phase 4: Standards Compliance
8. **Check for common game development issues**:
- [ ] Frame-rate independence (delta time usage)
- [ ] No allocations in hot paths (update loops)
- [ ] Proper null/empty state handling
- [ ] Thread safety where required
- [ ] Resource cleanup (no leaks)
Identify the system category (engine, gameplay, AI, networking, UI, tools) and evaluate:
9. **Engine Specialist Review** — If an engine is configured (step 2.5), spawn engine specialists via Task in parallel with your own review above:
- Determine which specialist applies to each file being reviewed:
- Primary language files (`.gd`, `.cs`, `.cpp`) → Language/Code Specialist
- Shader files (`.gdshader`, `.hlsl`, shader graph) → Shader Specialist
- UI screen/widget code → UI Specialist
- Cross-cutting or unclear → Primary Specialist
- Spawn the relevant specialist(s) with: the file(s), the engine reference docs path (`docs/engine-reference/[engine]/`), and the task: "Review for engine-idiomatic patterns, deprecated or incorrect API usage, engine-specific performance concerns, and any patterns the engine's documentation recommends against."
- Also spawn the **Primary Specialist** for any file that touches engine architecture (scene structure, node hierarchy, component design, lifecycle hooks).
- Collect findings and include them in the review output under `### Engine Specialist Findings` (placed between `### Game-Specific Concerns` and `### Positive Observations`).
- If no engine is configured, omit the `### Engine Specialist Findings` section.
- [ ] Public methods and classes have doc comments
- [ ] Cyclomatic complexity under 10 per method
- [ ] No method exceeds 40 lines (excluding data declarations)
- [ ] Dependencies are injected (no static singletons for game state)
- [ ] Configuration values loaded from data files
- [ ] Systems expose interfaces (not concrete class dependencies)
10. **Output the review** in this format:
---
## Phase 5: Architecture and SOLID
**Architecture:**
- [ ] Correct dependency direction (engine <- gameplay, not reverse)
- [ ] No circular dependencies between modules
- [ ] Proper layer separation (UI does not own game state)
- [ ] Events/signals used for cross-system communication
- [ ] Consistent with established patterns in the codebase
**SOLID:**
- [ ] Single Responsibility: Each class has one reason to change
- [ ] Open/Closed: Extendable without modification
- [ ] Liskov Substitution: Subtypes substitutable for base types
- [ ] Interface Segregation: No fat interfaces
- [ ] Dependency Inversion: Depends on abstractions, not concretions
---
## Phase 6: Game-Specific Concerns
- [ ] Frame-rate independence (delta time usage)
- [ ] No allocations in hot paths (update loops)
- [ ] Proper null/empty state handling
- [ ] Thread safety where required
- [ ] Resource cleanup (no leaks)
---
## Phase 7: Engine Specialist Review
If an engine is configured, spawn engine specialists via Task in parallel with the review above. Determine which specialist applies to each file:
- Primary language files (`.gd`, `.cs`, `.cpp`) → Language/Code Specialist
- Shader files (`.gdshader`, `.hlsl`, shader graph) → Shader Specialist
- UI screen/widget code → UI Specialist
- Cross-cutting or unclear → Primary Specialist
Also spawn the **Primary Specialist** for any file touching engine architecture (scene structure, node hierarchy, lifecycle hooks).
Collect findings and include them under `### Engine Specialist Findings`.
---
## Phase 8: Output Review
```
## Code Review: [File/System Name]
@ -131,3 +131,13 @@ When this skill is invoked:
### Verdict: [APPROVED / APPROVED WITH SUGGESTIONS / CHANGES REQUIRED]
```
This skill is read-only — no files are written.
---
## Phase 9: Next Steps
- If verdict is APPROVED: run `/story-done [story-path]` to close the story.
- If verdict is CHANGES REQUIRED: fix the issues and re-run `/code-review`.
- If an ARCHITECTURAL VIOLATION is found: run `/architecture-decision` to record the correct approach.

View file

@ -0,0 +1,276 @@
---
name: consistency-check
description: "Scan all GDDs against the entity registry to detect cross-document inconsistencies: same entity with different stats, same item with different values, same formula with different variables. Grep-first approach — reads registry then targets only conflicting GDD sections rather than full document reads."
argument-hint: "[full | since-last-review | entity:<name> | item:<name>]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash
context: fork
---
# Consistency Check
Detects cross-document inconsistencies by comparing all GDDs against the
entity registry (`design/registry/entities.yaml`). Uses a grep-first approach:
reads the registry once, then targets only the GDD sections that mention
registered names — no full document reads unless a conflict needs investigation.
**This skill is the write-time safety net.** It catches what `/design-system`'s
per-section checks may have missed and what `/review-all-gdds`'s holistic review
catches too late.
**When to run:**
- After writing each new GDD (before moving to the next system)
- Before `/review-all-gdds` (so that skill starts with a clean baseline)
- Before `/create-architecture` (inconsistencies poison downstream ADRs)
- On demand: `/consistency-check entity:[name]` to check one entity specifically
**Output:** Conflict report + optional registry corrections
---
## Phase 1: Parse Arguments and Load Registry
**Modes:**
- No argument / `full` — check all registered entries against all GDDs
- `since-last-review` — check only GDDs modified since the last review report
- `entity:<name>` — check one specific entity across all GDDs
- `item:<name>` — check one specific item across all GDDs
**Load the registry:**
```
Read path="design/registry/entities.yaml"
```
If the file does not exist or has no entries:
> "Entity registry is empty. Run `/design-system` to write GDDs — the registry
> is populated automatically after each GDD is completed. Nothing to check yet."
Stop and exit.
Build four lookup tables from the registry:
- **entity_map**: `{ name → { source, attributes, referenced_by } }`
- **item_map**: `{ name → { source, value_gold, weight, ... } }`
- **formula_map**: `{ name → { source, variables, output_range } }`
- **constant_map**: `{ name → { source, value, unit } }`
Count total registered entries. Report:
```
Registry loaded: [N] entities, [N] items, [N] formulas, [N] constants
Scope: [full | since-last-review | entity:name]
```
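The four lookup tables can be built in one pass over the parsed registry. The schema below is an assumed shape for `entities.yaml`; in the real skill the dict would come from parsing the registry file (e.g. `yaml.safe_load`):

```python
# Build the Phase 1 lookup tables from a parsed registry dict.
# Section names and entry fields are assumptions about entities.yaml.
def build_lookups(registry):
    """Index each registry section by entry name for fast conflict checks."""
    tables = {}
    for section in ("entities", "items", "formulas", "constants"):
        entries = registry.get(section) or []
        tables[section] = {e["name"]: e for e in entries}
    return tables
```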
---
## Phase 2: Locate In-Scope GDDs
```
Glob pattern="design/gdd/*.md"
```
Exclude: `game-concept.md`, `systems-index.md`, `game-pillars.md` — these are
not system GDDs.
For `since-last-review` mode, find the creation date of the most recent
`design/gdd/gdd-cross-review-*.md` report, then list only GDDs modified since
that date:
```bash
git log --since="[review-date]" --name-only --pretty=format: -- design/gdd/ | grep "\.md$" | sort -u
```
Report the in-scope GDD list before scanning.
---
## Phase 3: Grep-First Conflict Scan
For each registered entry, grep every in-scope GDD for the entry's name.
Do NOT do full reads — extract only the matching lines and their immediate
context (-C 3 lines).
This is the core optimization: instead of reading 10 GDDs × 400 lines each
(4,000 lines), you run ~50 targeted greps, one per registered name, each
covering all 10 GDDs and returning ~10 lines on a hit (roughly 500 lines total).
### 3a: Entity Scan
For each entity in entity_map:
```
Grep pattern="[entity_name]" glob="design/gdd/*.md" output_mode="content" -C 3
```
For each GDD hit, extract the values mentioned near the entity name:
- any numeric attributes (counts, costs, durations, ranges, rates)
- any categorical attributes (types, tiers, categories)
- any derived values (totals, outputs, results)
- any other attributes registered in entity_map
Compare extracted values against the registry entry.
**Conflict detection:**
- Registry says `[entity_name].[attribute] = [value_A]`. GDD says `[entity_name] has [value_B]`. → **CONFLICT**
- Registry says `[item_name].[attribute] = [value_A]`. GDD says `[item_name] is [value_B]`. → **CONFLICT**
- GDD mentions `[entity_name]` but doesn't specify the attribute. → **NOTE** (no conflict, just unverifiable)
### 3b: Item Scan
For each item in item_map, grep all GDDs for the item name. Extract:
- sell price / value / gold value
- weight
- stack rules (stackable / non-stackable)
- category
Compare against registry entry values.
### 3c: Formula Scan
For each formula in formula_map, grep all GDDs for the formula name. Extract:
- variable names mentioned near the formula
- output range or cap values mentioned
Compare against registry entry:
- Different variable names → **CONFLICT**
- Output range stated differently → **CONFLICT**
### 3d: Constant Scan
For each constant in constant_map, grep all GDDs for the constant name. Extract:
- Any numeric value mentioned near the constant name
Compare against registry value:
- Different number → **CONFLICT**
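The constant scan can be sketched as a regex pass over the grep hits: pull every number that appears on a line mentioning the constant and flag any that differ from the registered value. The number pattern and tolerance (exact equality) are assumptions:

```python
import re

# Phase 3d sketch: numbers appearing near a constant's name in grep output.
NUMBER = re.compile(r"-?\d+(?:\.\d+)?")

def constant_conflicts(lines, name, registered):
    """Return lines that mention `name` alongside a different number."""
    conflicts = []
    for line in lines:
        if name in line:
            for num in NUMBER.findall(line):
                if float(num) != registered:
                    conflicts.append(line)
                    break
    return conflicts
```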
---
## Phase 4: Deep Investigation (Conflicts Only)
For each conflict found in Phase 3, do a targeted full-section read of the
conflicting GDD to get precise context:
```
Read path="design/gdd/[conflicting_gdd].md"
```
(Or use Grep with wider context if the file is large)
Confirm the conflict with full context. Determine:
1. **Which GDD is correct?** Check the `source:` field in the registry — the
source GDD is the authoritative owner. Any other GDD that contradicts it
is the one that needs updating.
2. **Is the registry itself out of date?** If the source GDD was updated after
the registry entry was written (check git log), the registry may be stale.
3. **Is this a genuine design change?** If the conflict represents an intentional
design decision, the resolution is: update the source GDD, update the registry,
then fix all other GDDs.
For each conflict, classify:
- **🔴 CONFLICT** — same named entity/item/formula/constant with different values
in different GDDs. Must resolve before architecture begins.
- **⚠️ STALE REGISTRY** — source GDD value changed but registry not updated.
Registry needs updating; other GDDs may be correct already.
- **ℹ️ UNVERIFIABLE** — entity mentioned but no comparable attribute stated.
Not a conflict; just noting the reference.
---
## Phase 5: Output Report
```
## Consistency Check Report
Date: [date]
Registry entries checked: [N entities, N items, N formulas, N constants]
GDDs scanned: [N] ([list names])
---
### Conflicts Found (must resolve before architecture)
🔴 [Entity/Item/Formula/Constant Name]
Registry (source: [gdd]): [attribute] = [value]
Conflict in [other_gdd].md: [attribute] = [different_value]
→ Resolution needed: [which doc to change and to what]
---
### Stale Registry Entries (registry behind the GDD)
⚠️ [Entry Name]
Registry says: [value] (written [date])
Source GDD now says: [new value]
→ Update registry entry to match source GDD, then check referenced_by docs.
---
### Unverifiable References (no conflict, informational)
[gdd].md mentions [entity_name] but states no comparable attributes.
No conflict detected. No action required.
---
### Clean Entries (no issues found)
✅ [N] registry entries verified across all GDDs with no conflicts.
---
Verdict: PASS | CONFLICTS FOUND
```
**Verdict:**
- **PASS** — no conflicts. Registry and GDDs agree on all checked values.
- **CONFLICTS FOUND** — one or more conflicts detected. List resolution steps.
---
## Phase 6: Registry Corrections
If stale registry entries were found, ask:
> "May I update `design/registry/entities.yaml` to fix the [N] stale entries?"
For each stale entry:
- Update the `value` / attribute field
- Set `revised:` to today's date
- Add a YAML comment with the old value: `# was: [old_value] before [date]`
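Applied to one entry, the correction might look like this (field names are illustrative and should follow the registry's existing schema):

```yaml
  - name: [constant_name]
    source: [owning_gdd].md
    value: [new_value]   # was: [old_value] before [date]
    revised: [today]
```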
If new entries were found in GDDs that are not in the registry, ask:
> "Found [N] entities/items mentioned in GDDs that aren't in the registry yet.
> May I add them to `design/registry/entities.yaml`?"
Only add entries that appear in more than one GDD (true cross-system facts).
**Never delete registry entries.** Set `status: deprecated` if an entry is removed
from all GDDs.
After writing: Verdict: **COMPLETE** — consistency check finished.
If conflicts remain unresolved: Verdict: **BLOCKED** — [N] conflicts need manual resolution before architecture begins.
### 6b: Append to Reflexion Log
If any 🔴 CONFLICT entries were found (regardless of whether they were resolved),
append an entry to `docs/consistency-failures.md` for each conflict:
```markdown
### [YYYY-MM-DD] — /consistency-check — 🔴 CONFLICT
**Domain**: [system domain(s) involved]
**Documents involved**: [source GDD] vs [conflicting GDD]
**What happened**: [specific conflict — entity name, attribute, differing values]
**Resolution**: [how it was fixed, or "Unresolved — manual action needed"]
**Pattern**: [generalised lesson, e.g. "Item values defined in combat GDD were not
referenced in economy GDD before authoring — always check entities.yaml first"]
```
Only append if `docs/consistency-failures.md` exists. If the file is missing,
skip this step silently — do not create the file from this skill.
---
## Next Steps
- **If PASS**: Run `/review-all-gdds` for holistic design-theory review, or
`/create-architecture` if all MVP GDDs are complete.
- **If CONFLICTS FOUND**: Fix the flagged GDDs, then re-run
`/consistency-check` to confirm resolution.
- **If STALE REGISTRY**: Update the registry (Phase 6), then re-run to verify.
- Run `/consistency-check` after writing each new GDD to catch issues early,
not at architecture time.

@ -1,7 +1,7 @@
---
name: content-audit
description: "Audit GDD-specified content counts against implemented content. Identifies what's planned vs built."
argument-hint: "[system-name|--summary]"
argument-hint: "[system-name | --summary | (no arg = full audit)]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write
context: fork
@ -131,7 +131,9 @@ Flag a system as `HIGH PRIORITY` in the report if:
### Full audit and single-system modes
Write the report to `docs/content-audit-[YYYY-MM-DD].md`:
Present the gap table and summary to the user. Ask: "May I write the full report to `docs/content-audit-[YYYY-MM-DD].md`?"
If yes, write the file:
```markdown
# Content Audit — [Date]
@ -187,3 +189,17 @@ to `/create-stories [epic-slug]` or `/quick-design` depending on the size of the
Print the Gap Table and Summary directly to conversation. Do not write a file.
End with: "Run `/content-audit` without `--summary` to write the full report."
---
## Phase 5 — Next Steps
After the audit, recommend the highest-value follow-up actions:
- If any system is `NOT STARTED` and MVP-tagged → "Run `/design-system [name]` to
add missing content counts to the GDD before implementation begins."
- If total gap is >50% → "Run `/sprint-plan` to allocate content work across upcoming sprints."
- If backlog stories are needed → "Run `/create-stories [epic-slug]` for each HIGH PRIORITY gap."
- If `--summary` was used → "Run `/content-audit` (no flag) to write the full report to `docs/`."
Verdict: **COMPLETE** — content audit finished.

@ -246,7 +246,7 @@ After writing the manifest:
1. **Load silently** — read all inputs before presenting anything
2. **Show the summary first** — let the user see the scope before writing
3. **Ask before writing** — always confirm before creating or overwriting the manifest
3. **Ask before writing** — always confirm before creating or overwriting the manifest. On write: Verdict: **COMPLETE** — control manifest written. On decline: Verdict: **BLOCKED** — user declined write.
4. **Source every rule** — never add a rule that doesn't trace to an ADR, a
technical preference, or an engine reference doc
5. **No interpretation** — extract rules as stated in ADRs; do not paraphrase

@ -114,7 +114,9 @@ Options: "Yes, create it", "Skip", "Pause — I need to write ADRs first"
## 5. Write Epic Files
After approval, write:
After approval, ask: "May I write the epic file to `production/epics/[epic-slug]/EPIC.md`?"
After user confirms, write:
### `production/epics/[epic-slug]/EPIC.md`
@ -193,3 +195,8 @@ After writing all epics for the requested scope:
3. **Ask before writing** — per-epic approval before writing any file
4. **No invention** — all content comes from GDDs, ADRs, and architecture docs
5. **Never create stories** — this skill stops at the epic level
After all requested epics are processed:
- **Verdict: COMPLETE** — [N] epic(s) written. Run `/create-stories [epic-slug]` per epic.
- **Verdict: BLOCKED** — user declined all epics, or no eligible systems found.

@ -232,3 +232,8 @@ what must be DONE before you can start it."
4. **Ask before writing** — get approval for the full story set before writing files
5. **No invention** — acceptance criteria come from GDDs, implementation notes from ADRs, rules from the manifest
6. **Never start implementation** — this skill stops at the story file level
After writing (or declining):
- **Verdict: COMPLETE** — [N] stories written to `production/epics/[epic-slug]/`. Run `/story-readiness` → `/dev-story` to begin implementation.
- **Verdict: BLOCKED** — user declined. No story files written.

@ -8,41 +8,47 @@ context: fork
agent: Explore
---
When this skill is invoked:
## Phase 1: Load Documents
1. **Read the target design document** in full.
Read the target design document in full. Read CLAUDE.md to understand project context and standards. Read related design documents referenced or implied by the target doc (check `design/gdd/` for related systems).
2. **Read the master CLAUDE.md** to understand project context and standards.
---
3. **Read related design documents** referenced or implied by the target doc
(check `design/gdd/` for related systems).
## Phase 2: Completeness Check
4. **Evaluate against the Design Document Standard checklist**:
- [ ] Has Overview section (one-paragraph summary)
- [ ] Has Player Fantasy section (intended feeling)
- [ ] Has Detailed Rules section (unambiguous mechanics)
- [ ] Has Formulas section (all math defined with variables)
- [ ] Has Edge Cases section (unusual situations handled)
- [ ] Has Dependencies section (other systems listed)
- [ ] Has Tuning Knobs section (configurable values identified)
- [ ] Has Acceptance Criteria section (testable success conditions)
Evaluate against the Design Document Standard checklist:
5. **Check for internal consistency**:
- Do the formulas produce values that match the described behavior?
- Do edge cases contradict the main rules?
- Are dependencies bidirectional (does the other system know about this one)?
- [ ] Has Overview section (one-paragraph summary)
- [ ] Has Player Fantasy section (intended feeling)
- [ ] Has Detailed Rules section (unambiguous mechanics)
- [ ] Has Formulas section (all math defined with variables)
- [ ] Has Edge Cases section (unusual situations handled)
- [ ] Has Dependencies section (other systems listed)
- [ ] Has Tuning Knobs section (configurable values identified)
- [ ] Has Acceptance Criteria section (testable success conditions)
6. **Check for implementability**:
- Are the rules precise enough for a programmer to implement without guessing?
- Are there any "hand-wave" sections where details are missing?
- Are performance implications considered?
---
7. **Check for cross-system consistency**:
- Does this conflict with any existing mechanic?
- Does this create unintended interactions with other systems?
- Is this consistent with the game's established tone and pillars?
## Phase 3: Consistency and Implementability
8. **Output the review** in this format:
**Internal consistency:**
- Do the formulas produce values that match the described behavior?
- Do edge cases contradict the main rules?
- Are dependencies bidirectional (does the other system know about this one)?
**Implementability:**
- Are the rules precise enough for a programmer to implement without guessing?
- Are there any "hand-wave" sections where details are missing?
- Are performance implications considered?
**Cross-system consistency:**
- Does this conflict with any existing mechanic?
- Does this create unintended interactions with other systems?
- Is this consistent with the game's established tone and pillars?
---
## Phase 4: Output Review
```
## Design Review: [Document Title]
@ -65,18 +71,19 @@ When this skill is invoked:
### Verdict: [APPROVED / NEEDS REVISION / MAJOR REVISION NEEDED]
```
9. **Contextual next step recommendations**:
- If the document being reviewed is `game-concept.md` or `game-pillars.md`:
- Check if `design/gdd/systems-index.md` exists
- If it does NOT exist, add to Recommendations:
> "This concept is ready for systems decomposition. Run `/map-systems`
> to break it down into individual systems with dependencies and priorities,
> then write per-system GDDs."
- If the document is an individual system GDD:
- Check if the systems index references this system
- If verdict is APPROVED: suggest "Update the systems index status for
this system to 'Approved'."
- If verdict is NEEDS REVISION or MAJOR REVISION NEEDED: suggest "Update
the systems index status for this system to 'In Review'."
- Note: This skill is read-only. The user (or `/design-system`) must
perform the actual status update in the systems index.
This skill is read-only — no files are written.
---
## Phase 5: Next Steps
If the document being reviewed is `game-concept.md` or `game-pillars.md`:
- Check if `design/gdd/systems-index.md` exists. If not, recommend: "Run `/map-systems` to break the concept down into individual systems with dependencies and priorities, then write per-system GDDs."
If the document is an individual system GDD:
- If verdict is APPROVED: suggest updating the system's status to 'Approved' in the systems index.
- If verdict is NEEDS REVISION or MAJOR REVISION NEEDED: suggest updating the status to 'In Review'.
Next skill options:
- APPROVED → `/create-epics` or `/map-systems`
- NEEDS REVISION → revise the doc then re-run `/design-review`

@ -1,7 +1,7 @@
---
name: design-system
description: "Guided, section-by-section GDD authoring for a single game system. Gathers context from existing docs, walks through each required section collaboratively, cross-references dependencies, and writes incrementally to file."
argument-hint: "<system-name> (e.g., 'combat-system', 'inventory', 'dialogue')"
argument-hint: "<system-name> (e.g., 'movement', 'progression', 'dialogue')"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Task, AskUserQuestion, TodoWrite
---
@ -11,8 +11,8 @@ When this skill is invoked:
## 1. Parse Arguments & Validate
A system name or retrofit path is **required**. If missing, fail with:
> "Usage: `/design-system <system-name>` — e.g., `/design-system combat-system`
> Or to fill gaps in an existing GDD: `/design-system retrofit design/gdd/combat-system.md`
> "Usage: `/design-system <system-name>` — e.g., `/design-system movement`
> Or to fill gaps in an existing GDD: `/design-system retrofit design/gdd/[system-name].md`
> Run `/map-systems` first to create the systems index, then use this skill
> to write individual system GDDs."
@ -66,6 +66,16 @@ primary advantage over ad-hoc design — it arrives informed.
- **Target system**: Find the system in the index. If not listed, warn:
> "[system-name] is not in the systems index. Would you like to add it, or
> design it as an off-index system?"
- **Entity registry**: Read `design/registry/entities.yaml` if it exists.
Extract all entries referenced by or relevant to this system (grep
`referenced_by.*[system-name]` and `source.*[system-name]`). Hold these
in context as **known facts** — values that other GDDs have already
established and this GDD must not contradict.
- **Reflexion log**: Read `docs/consistency-failures.md` if it exists.
Extract entries whose Domain matches this system's category. These are
recurring conflict patterns — present them under "Past failure patterns"
in the Phase 2d context summary so the user knows where mistakes have
occurred before in this domain.
### 2b: Dependency Reads
@ -87,8 +97,8 @@ For each dependency GDD that exists, extract and hold in context:
- **Existing GDD**: Read `design/gdd/[system-name].md` if it exists (resume, don't
restart from scratch)
- **Related GDDs**: Glob `design/gdd/*.md` and read any that are thematically related
(e.g., if designing "status-effects", also read "combat-system" even if it's not
a direct dependency)
(e.g., if designing a system that overlaps with another in scope, read the related GDD
even if it's not a formal dependency)
### 2d: Present Context Summary
@ -100,6 +110,15 @@ Before starting design work, present a brief summary to the user:
> - Depended on by: [list, noting which have GDDs vs. undesigned]
> - Existing decisions to respect: [key constraints from dependency GDDs]
> - Pillar alignment: [which pillar(s) this system primarily serves]
> - **Known cross-system facts (from registry):**
> - [entity_name]: [attribute]=[value], [attribute]=[value] (owned by [source GDD])
> - [item_name]: [attribute]=[value], [attribute]=[value] (owned by [source GDD])
> - [formula_name]: variables=[list], output=[min-max] (owned by [source GDD])
> - [constant_name]: [value] [unit] (owned by [source GDD])
> *(These values are locked — if this GDD needs different values, surface
> the conflict before writing. Do not silently use different numbers.)*
>
> If no registry entries are relevant: omit the "Known cross-system facts" section.
If any upstream dependencies are undesigned, warn:
> "[dependency] doesn't have a GDD yet. We'll need to make assumptions about
@ -287,6 +306,17 @@ Context -> Questions -> Options -> Decision -> Draft -> Approval ->
7. **Write**: Use the Edit tool to replace the `[To be designed]` placeholder with
the approved content. Confirm the write.
8. **Registry conflict check** (Sections C and D only — Detailed Design and Formulas):
After writing, scan the section content for entity names, item names, formula
names, and numeric constants that appear in the registry. For each match:
- Compare the value just written against the registry entry.
- If they differ: **surface the conflict immediately** before starting the next
section. Do not continue silently.
> "Registry conflict: [name] is registered in [source GDD] as [registry_value].
> This section just wrote [new_value]. Which is correct?"
- If new (not in registry): flag it as a candidate for registry registration
(will be handled in Phase 5).
After writing each section, update `production/session-state/active.md` with the
completed section name.
@ -347,8 +377,8 @@ This is usually the largest section. Break it into sub-sections:
mechanical modeling. Provide the full context gathered in Phase 2.
**Cross-reference**: For each interaction listed, verify it matches what the
dependency GDD specifies. If the dependency says "damage is calculated as X" and
this system expects something different, flag the conflict.
dependency GDD specifies. If a dependency defines a value or formula and this
system expects something different, flag the conflict.
---
@ -357,13 +387,32 @@ this system expects something different, flag the conflict.
**Goal**: Every mathematical formula, with variables defined, ranges specified,
and edge cases noted.
**Completion Steering — always begin each formula with this exact structure:**
```
The [formula_name] formula is defined as:
`[formula_name] = [expression]`
**Variables:**
| Variable | Symbol | Type | Range | Description |
|----------|--------|------|-------|-------------|
| [name] | [sym] | float/int | [min-max] | [what it represents] |
**Output Range:** [min] to [max] under normal play; [behaviour at extremes]
**Example:** [worked example with real numbers]
```
Do NOT write `[Formula TBD]` or describe a formula in prose without the variable
table. A formula without defined variables cannot be implemented without guesswork.
**Questions to ask**:
- What are the core calculations this system performs?
- Should scaling be linear, logarithmic, or stepped?
- What should the output ranges be at early/mid/late game?
**Agent delegation**: For formula-heavy systems (combat, economy, progression),
delegate to `systems-designer` via the Task tool. Provide:
**Agent delegation**: For formula-heavy systems, delegate to `systems-designer`
via the Task tool. Provide:
- The Core Rules from Section C (already written to file)
- Tuning goals from the user
- Balance context from dependency GDDs
@ -380,18 +429,28 @@ this system, reference it explicitly. Don't reinvent — connect.
**Goal**: Explicitly handle unusual situations so they don't become bugs.
**Completion Steering — format each edge case as:**
- **If [condition]**: [exact outcome]. [rationale if non-obvious]
Example (adapt terminology to the game's domain):
- **If [resource] reaches 0 while [protective condition] is active**: hold at minimum until condition ends, then apply consequence.
- **If two [triggers/events] fire simultaneously**: resolve in [defined priority order]; ties use [defined tiebreak rule].
Do NOT write vague entries like "handle appropriately" — each must name the exact
condition and the exact resolution. An edge case without a resolution is an open
design question, not a specification.
**Questions to ask**:
- What happens at zero? At maximum? At negative values?
- What happens when two effects trigger simultaneously?
- What happens if the player tries to exploit this? (Identify degenerate strategies)
- What happens at zero? At maximum? At out-of-range values?
- What happens when two rules apply at the same time?
- What happens if a player finds an unintended interaction? (Identify degenerate strategies)
**Agent delegation**: For systems with complex interactions, delegate to
`systems-designer` to identify edge cases from the formula space. For narrative
systems, consult `narrative-director` for story-breaking edge cases.
**Cross-reference**: Check edge cases against dependency GDDs. If combat says
"damage cannot go below 1" but this system can reduce damage to 0, that's a
conflict to resolve.
**Cross-reference**: Check edge cases against dependency GDDs. If a dependency
defines a floor, cap, or resolution rule that this system could violate, flag it.
---
@ -433,6 +492,17 @@ reference them here. Don't create duplicate knobs — point to the source of tru
**Goal**: Testable conditions that prove the system works as designed.
**Completion Steering — format each criterion as Given-When-Then:**
- **GIVEN** [initial state], **WHEN** [action or trigger], **THEN** [measurable outcome]
Example (adapt terminology to the game's domain):
- **GIVEN** [initial state], **WHEN** [player action or system trigger], **THEN** [specific measurable outcome].
- **GIVEN** [a constraint is active], **WHEN** [player attempts an action], **THEN** [feedback shown and action result].
Include at least: one criterion per core rule from Section C, and one per formula
from Section D. Do NOT write "the system works as designed" — every criterion must
be independently verifiable by a QA tester without reading the GDD.
**Questions to ask**:
- What's the minimum set of tests that prove this works?
- What performance budget does this system get? (frame time, memory)
@ -487,7 +557,37 @@ the source of truth). Verify:
- Dependencies are listed with interfaces
- Acceptance criteria are testable
### 5b: Offer Design Review
### 5b: Update Entity Registry
Scan the completed GDD for cross-system facts that should be registered:
- Named entities (enemies, NPCs, bosses) with stats or drops
- Named items with values, weights, or categories
- Named formulas with defined variables and output ranges
- Named constants referenced by value in more than one place
For each candidate, check if it already exists in `design/registry/entities.yaml`:
```
Grep pattern=" - name: [candidate_name]" path="design/registry/entities.yaml"
```
Present a summary:
```
Registry candidates from this GDD:
NEW (not yet registered):
- [entity_name] [entity]: [attribute]=[value], [attribute]=[value]
- [item_name] [item]: [attribute]=[value], [attribute]=[value]
- [formula_name] [formula]: variables=[list], output=[min-max]
ALREADY REGISTERED (referenced_by will be updated):
- [constant_name] [constant]: value=[N] ← matches registry ✅
```
Ask: "May I update `design/registry/entities.yaml` with these [N] new entries
and update `referenced_by` for the existing entries?"
If yes: append new entries and update `referenced_by` arrays. Never modify
existing `value` / attribute fields without surfacing it as a conflict first.
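A sketch of the resulting edit (schema details beyond `name`, `source`, and `referenced_by` are assumptions):

```yaml
# Appended as a new entry:
  - name: [entity_name]
    source: [this_gdd].md
    attributes:
      [attribute]: [value]

# Existing entry: referenced_by updated, value untouched
  - name: [constant_name]
    value: [N]
    referenced_by:
      - [source_gdd].md
      - [this_gdd].md
```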
### 5c: Offer Design Review
Present a completion summary:

@ -3,7 +3,7 @@ name: dev-story
description: "Read a story file and implement it. Loads the full context (story, GDD requirement, ADR guidelines, control manifest), routes to the right programmer agent for the system and engine, implements the code and test, and confirms each acceptance criterion. The core implementation skill — run after /story-readiness, before /code-review and /story-done."
argument-hint: "[story-path]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task
allowed-tools: Read, Glob, Grep, Write, Bash, Task
context: fork
---
@ -252,6 +252,7 @@ Common blockers:
## Collaborative Protocol
- **File writes are delegated** — all source code, test files, and evidence docs are written by sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?" protocol individually. This orchestrator does not write files directly.
- **Load before implementing** — do not start coding until all context is loaded
(story, TR-ID, ADR, manifest, engine prefs). Incomplete context produces code
that drifts from design.

@ -6,51 +6,51 @@ user-invocable: true
allowed-tools: Read, Glob, Grep
---
When this skill is invoked:
## Phase 1: Understand the Task
1. **Read the task description** from the argument. If the description is too
vague to estimate meaningfully, ask for clarification before proceeding.
Read the task description from the argument. If the description is too vague to estimate meaningfully, ask for clarification before proceeding.
2. **Read CLAUDE.md** for project context: tech stack, coding standards,
architectural patterns, and any estimation guidelines.
Read CLAUDE.md for project context: tech stack, coding standards, architectural patterns, and any estimation guidelines.
3. **Read relevant design documents** from `design/gdd/` if the task relates
to a documented feature or system.
Read relevant design documents from `design/gdd/` if the task relates to a documented feature or system.
4. **Scan the codebase** to understand the systems affected by this task:
- Identify files and modules that would need to change
- Assess the complexity of those files (size, dependency count, cyclomatic
complexity)
- Identify integration points with other systems
- Check for existing test coverage in the affected areas
---
5. **Read past sprint data** from `production/sprints/` if available:
- Look for similar completed tasks and their actual effort
- Calculate historical velocity (planned vs actual)
- Identify any estimation bias patterns (consistently over or under)
## Phase 2: Scan Affected Code
6. **Analyze the following factors**:
Identify files and modules that would need to change:
**Code Complexity**:
- Lines of code in affected files
- Number of dependencies and coupling level
- Whether this touches core/engine code vs leaf/feature code
- Whether existing patterns can be followed or new patterns are needed
- Assess complexity (size, dependency count, cyclomatic complexity)
- Identify integration points with other systems
- Check for existing test coverage in the affected areas
- Read past sprint data from `production/sprints/` for similar completed tasks and historical velocity
**Scope**:
- Number of systems touched
- New code vs modification of existing code
- Amount of new test coverage required
- Data migration or configuration changes needed
---
**Risk**:
- New technology or unfamiliar libraries
- Unclear or ambiguous requirements
- Dependencies on unfinished work
- Cross-system integration complexity
- Performance sensitivity
## Phase 3: Analyze Complexity Factors
7. **Generate the estimate**:
**Code Complexity:**
- Lines of code in affected files
- Number of dependencies and coupling level
- Whether this touches core/engine code vs leaf/feature code
- Whether existing patterns can be followed or new patterns are needed
**Scope:**
- Number of systems touched
- New code vs modification of existing code
- Amount of new test coverage required
- Data migration or configuration changes needed
**Risk:**
- New technology or unfamiliar libraries
- Unclear or ambiguous requirements
- Dependencies on unfinished work
- Cross-system integration complexity
- Performance sensitivity
---
## Phase 4: Generate the Estimate
```markdown
## Task Estimate: [Task Name]
@ -65,99 +65,67 @@ Generated: [Date]
|--------|-----------|-------|
| Systems affected | [List] | [Core, gameplay, UI, etc.] |
| Files likely modified | [Count] | [Key files listed below] |
| New code vs modification | [Ratio, e.g., 70% new / 30% modification] | |
| New code vs modification | [Ratio] | |
| Integration points | [Count] | [Which systems interact] |
| Test coverage needed | [Low / Medium / High] | [Unit, integration, manual] |
| Existing patterns available | [Yes / Partial / No] | [Can follow existing code or new ground] |
| Test coverage needed | [Low / Medium / High] | |
| Existing patterns available | [Yes / Partial / No] | |
**Key files likely affected:**
- `[path/to/file1]` -- [what changes here]
- `[path/to/file2]` -- [what changes here]
- `[path/to/file3]` -- [what changes here]
### Effort Estimate
| Scenario | Days | Assumption |
|----------|------|------------|
| Optimistic | [X] | Everything goes right, no surprises, requirements are clear |
| Expected | [Y] | Normal pace, minor issues, one round of review feedback |
| Pessimistic | [Z] | Significant unknowns surface, blocked for a day, requirements change |
| Optimistic | [X] | Everything goes right, no surprises |
| Expected | [Y] | Normal pace, minor issues, one round of review |
| Pessimistic | [Z] | Significant unknowns surface, blocked for a day |
**Recommended budget: [Y days]**
[If historical data is available: "Based on [N] similar tasks that averaged
[X] days actual vs [Y] days estimated, a [correction factor] adjustment has
been applied."]
### Confidence: [High / Medium / Low]
**High** -- Clear requirements, familiar systems, follows existing patterns,
similar tasks completed before.
**Medium** -- Some unknowns, touches moderately complex systems, partial
precedent from previous work.
**Low** -- Significant unknowns, new technology, unclear requirements, or
cross-cutting concerns across many systems.
[Explain which factors drive the confidence level for this specific task.]
### Risk Factors
| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| [Specific risk] | [High/Med/Low] | [Days added if realized] | [How to reduce] |
| [Another risk] | [Likelihood] | [Impact] | [Mitigation] |
### Dependencies
| Dependency | Status | Impact if Delayed |
|-----------|--------|-------------------|
| [What must be done first] | [Done / In Progress / Not Started] | [How it affects this task] |
### Suggested Breakdown
| # | Sub-task | Estimate | Notes |
|---|----------|----------|-------|
| 1 | [Research / spike] | [X days] | [If unknowns need investigation first] |
| 2 | [Core implementation] | [X days] | [The main work] |
| 3 | [Integration with system X] | [X days] | [Connecting to existing code] |
| 4 | [Testing and validation] | [X days] | [Writing tests, manual verification] |
| 5 | [Code review and iteration] | [X days] | [Review feedback, fixes] |
| 1 | [Research / spike] | [X days] | |
| 2 | [Core implementation] | [X days] | |
| 3 | [Testing and validation] | [X days] | |
| | **Total** | **[Y days]** | |
### Historical Comparison
[If similar tasks exist in sprint history:]
| Similar Task | Estimated | Actual | Relevant Difference |
|-------------|-----------|--------|-------------------|
| [Past task 1] | [X days] | [Y days] | [What makes it similar/different] |
| [Past task 2] | [X days] | [Y days] | [What makes it similar/different] |
### Notes and Assumptions
- [Key assumption that affects the estimate]
- [Another assumption]
- [Any caveats about scope boundaries -- what is included vs excluded]
- [Recommendations: e.g., "Consider a spike first if requirement X is unclear"]
- [Any caveats about scope boundaries]
```
8. **Output the estimate** to the user with a brief summary: recommended
budget, confidence level, and the single biggest risk factor.
Output the estimate with a brief summary: recommended budget, confidence level, and the single biggest risk factor.
This skill is read-only — no files are written. Verdict: **COMPLETE** — estimate generated.
---
## Phase 5: Next Steps
- If confidence is Low: recommend a time-boxed spike (`/prototype`) before committing.
- If the task is > 10 days: recommend breaking it into smaller stories via `/create-stories`.
- To schedule the task: run `/sprint-plan update` to add it to the next sprint.
### Guidelines
- Always give a range (optimistic / expected / pessimistic), never a single
number. Single-point estimates create false precision.
- The recommended budget should be the expected estimate, not the optimistic
one. Padding is not dishonest -- it is realistic.
- If confidence is Low, recommend a time-boxed spike or prototype before
committing to the full estimate.
- Be explicit about what is included and excluded. Scope ambiguity is the
most common source of estimation error.
- Round to half-day increments. Estimating in hours implies false precision
for tasks longer than a day.
- If the task is too large to estimate confidently (more than 10 days
expected), recommend breaking it into smaller tasks and estimating those
individually.
- Do not pad estimates silently. If risk exists, call it out explicitly in
the risk factors section so the team can decide how to handle it.
- Always give a range (optimistic / expected / pessimistic), never a single number
- The recommended budget should be the expected estimate, not the optimistic one
- Round to half-day increments — estimating in hours implies false precision for tasks longer than a day
- Do not pad estimates silently — call out risk explicitly so the team can decide

@ -217,6 +217,13 @@ The project progresses through these stages:
## 3. Run the Gate Check
**Before running artifact checks**, read `docs/consistency-failures.md` if it exists.
Extract entries whose Domain matches the target phase (e.g., if checking
Systems Design → Technical Setup, pull entries from any GDD system domain;
if checking Technical Setup → Pre-Production, pull entries in Architecture or Engine).
Carry these as context — recurring conflict patterns in the target domain warrant
increased scrutiny on those specific checks.
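The extraction step above can be sketched as a filter over the failure log. The entry layout (blank-line-separated blocks containing a `Domain:` line) is an assumption; the real `docs/consistency-failures.md` format may differ.

```python
def entries_for_phase(markdown, target_domains):
    """Pull consistency-failure entries whose Domain matches the
    target phase's domains (e.g. {"Economy", "Combat"})."""
    relevant = []
    for block in markdown.split("\n\n"):
        for line in block.splitlines():
            if line.strip().lower().startswith("domain:"):
                domain = line.split(":", 1)[1].strip()
                if domain in target_domains:
                    relevant.append(block.strip())
    return relevant
```

The matched blocks are then carried as context into the gate check for the corresponding domain.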
For each item in the target gate:
### Artifact Checks
@ -285,6 +292,46 @@ For items that can't be automatically verified, **ask the user**:
---
## 5a. Chain-of-Verification
After drafting the verdict in Phase 5, challenge it before finalising.
**Step 1 — Generate 5 challenge questions** designed to disprove the verdict:
For a **PASS** draft:
- "Which quality checks did I verify by actually reading a file, vs. inferring they passed?"
- "Are there MANUAL CHECK NEEDED items I marked PASS without user confirmation?"
- "Did I confirm all listed artifacts have real content, not just empty headers?"
- "Could any blocker I dismissed as minor actually prevent the phase from succeeding?"
- "Which single check am I least confident in, and why?"
For a **CONCERNS** draft:
- "Could any listed CONCERN be elevated to a blocker given the project's current state?"
- "Is the concern resolvable within the next phase, or does it compound over time?"
- "Did I soften any FAIL condition into a CONCERN to avoid a harder verdict?"
- "Are there artifacts I didn't check that could reveal additional blockers?"
- "Do all the CONCERNS together create a blocking problem even if each is minor alone?"
For a **FAIL** draft:
- "Have I accurately separated hard blockers from strong recommendations?"
- "Are there any PASS items I was too lenient about?"
- "Am I missing any additional blockers the user should know about?"
- "Can I provide a minimal path to PASS — the specific 3 things that must change?"
- "Is the fail condition resolvable, or does it indicate a deeper design problem?"
**Step 2 — Answer each question** independently.
Do NOT reference the draft verdict text — re-check specific files or ask the user.
**Step 3 — Revise if needed:**
- If any answer reveals a missed blocker → upgrade verdict (PASS→CONCERNS or CONCERNS→FAIL)
- If any answer reveals an over-stated blocker → downgrade only if citing specific evidence
- If answers are consistent → confirm verdict unchanged
**Step 4 — Note the verification** in the final report output:
`Chain-of-Verification: [N] questions checked — verdict [unchanged | revised from X to Y]`
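The Step 3 revision rules reduce to a one-step ratchet along the verdict scale. This sketch is a simplification: a real downgrade also requires the specific cited evidence, which is not modeled here.

```python
ORDER = ["PASS", "CONCERNS", "FAIL"]

def revise_verdict(draft, missed_blocker=False, overstated_blocker=False):
    """Revise a draft verdict per the Step 3 rules."""
    i = ORDER.index(draft)
    if missed_blocker:
        i = min(i + 1, len(ORDER) - 1)   # PASS -> CONCERNS, or CONCERNS -> FAIL
    elif overstated_blocker:
        i = max(i - 1, 0)                # only with specific cited evidence
    return ORDER[i]
```

If neither condition holds, the answers were consistent and the verdict is confirmed unchanged.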
---
## 6. Update Stage on PASS
When the verdict is **PASS** and the user confirms they want to advance:


@ -11,6 +11,8 @@ model: haiku
# Studio Help — What Do I Do Next?
This skill is read-only — it reports findings but writes no files.
This skill figures out exactly where you are in the game development pipeline and
tells you what comes next. It is **lightweight** — not a full audit. For a full
gap analysis, use `/project-stage-detect`.
@ -163,6 +165,8 @@ Approaching **[next phase]** gate → run `/gate-check` when ready.
- If a step has no command (e.g. "Implement Stories"), explain what to do instead of showing a slash command
- For MANUAL steps, ask the user: "I can't tell if [step] is done — has it been completed?"
Verdict: **COMPLETE** — next steps identified.
---
## Step 8: Gate Warning (if close)


@ -5,62 +5,90 @@ argument-hint: "[bug-id or description]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task
---
> **Explicit invocation only**: This skill should only run when the user explicitly requests it with `/hotfix`. Do not auto-invoke based on context matching.

## Phase 1: Assess Severity

Read the bug description or ID. Determine severity:

- **S1 (Critical)**: Game unplayable, data loss, security vulnerability — hotfix immediately
- **S2 (Major)**: Significant feature broken, workaround exists — hotfix within 24 hours
- If severity is S3 or lower, recommend using the normal bug fix workflow instead and stop.

---

## Phase 2: Create Hotfix Record

Draft the hotfix record:

```markdown
## Hotfix: [Short Description]
Date: [Date]
Severity: [S1/S2]
Reporter: [Who found it]
Status: IN PROGRESS

### Problem
[Clear description of what is broken and the player impact]

### Root Cause
[To be filled during investigation]

### Fix
[To be filled during implementation]

### Testing
[What was tested and how]

### Approvals
- [ ] Fix reviewed by lead-programmer
- [ ] Regression test passed (qa-tester)
- [ ] Release approved (producer)

### Rollback Plan
[How to revert if the fix causes new issues]
```

Ask: "May I write this to `production/hotfixes/hotfix-[date]-[short-name].md`?"

If yes, write the file, creating the directory if needed.
---
## Phase 3: Create Hotfix Branch
If git is initialized, create the hotfix branch:
```
git checkout -b hotfix/[short-name] [release-tag-or-main]
```
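The branch name in the command above can be derived from the bug's short description. This slug rule is an assumption for illustration; the skill only specifies the `hotfix/[short-name]` convention.

```python
import re

def hotfix_branch(short_name):
    """Turn a free-text bug description into a branch name
    following the hotfix/[short-name] convention."""
    slug = re.sub(r"[^a-z0-9]+", "-", short_name.lower()).strip("-")
    return f"hotfix/{slug}"
```

For example, "Save Crash on Load!" becomes `hotfix/save-crash-on-load`.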
---
## Phase 4: Investigate and Implement
Focus on the minimal change that resolves the issue. Do NOT refactor, clean up, or add features alongside the hotfix.
Validate the fix by running targeted tests for the affected system. Check for regressions in adjacent systems.
Update the hotfix record with root cause, fix details, and test results.
---
## Phase 5: Collect Approvals
Use the Task tool to request sign-off in parallel:
- `subagent_type: lead-programmer` — Review the fix for correctness and side effects
- `subagent_type: qa-tester` — Run targeted regression tests on the affected system
- `subagent_type: producer` — Approve deployment timing and communication plan
---
## Phase 6: Summary
Output a summary with: severity, root cause, fix applied, testing status, and what approvals are still needed before deployment.
### Rules
- Hotfixes must be the MINIMUM change to fix the issue — no cleanup, no refactoring, no "while we're here" changes
@ -68,3 +96,15 @@ When this skill is invoked:
- Hotfix branches merge to BOTH the release branch AND the development branch
- All hotfixes require a post-incident review within 48 hours
- If the fix is complex enough to need more than 4 hours, escalate to technical-director for a scope decision
---
## Phase 7: Next Steps
After the fix is approved and merged:

- Run `/smoke-check` to verify critical paths are intact.
- Run `/code-review` on the hotfix diff before merging to main.
- Schedule a post-incident review within 48 hours.

Verdict: **COMPLETE** — hotfix applied and backported.


@ -6,26 +6,33 @@ user-invocable: true
allowed-tools: Read, Glob, Grep, Write
---
> **Explicit invocation only**: This skill should only run when the user explicitly requests it with `/launch-checklist`. Do not auto-invoke based on context matching.
## Phase 1: Parse Arguments
Read the argument for the launch date or `dry-run` mode. Dry-run mode generates the checklist without creating sign-off entries or writing files.
---
## Phase 2: Gather Project Context
- Read `CLAUDE.md` for tech stack, target platforms, and team structure
- Read the latest milestone in `production/milestones/`
- Read any existing release checklist in `production/releases/`
- Read the content calendar in `design/live-ops/content-calendar.md` if it exists
---
## Phase 3: Scan Codebase Health
- Count `TODO`, `FIXME`, `HACK` comments and their locations
- Check for any `console.log`, `print()`, or debug output left in production code
- Check for placeholder assets (search for `placeholder`, `temp_`, `WIP_`)
- Check for hardcoded test/dev values (localhost, test credentials, debug flags)
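The marker scan above can be sketched as a simple per-line count. A real scan would walk `src/` and record `file:line` locations; this version operates on a source string for illustration.

```python
import re

MARKERS = ("TODO", "FIXME", "HACK")

def count_markers(source):
    """Count TODO/FIXME/HACK markers in a source string,
    matching whole words only so e.g. 'HACKY' is not counted."""
    counts = {m: 0 for m in MARKERS}
    for line in source.splitlines():
        for m in MARKERS:
            if re.search(rf"\b{m}\b", line):
                counts[m] += 1
    return counts
```

The same pattern extends to the placeholder-asset and debug-output checks by swapping in the relevant search terms.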
---
## Phase 4: Generate the Launch Checklist
```markdown
# Launch Checklist: [Game Title]
@ -214,8 +221,19 @@ Generated: [Date]
- [ ] Release Manager — Build and deployment readiness
```
---
## Phase 5: Save Checklist
Present the completed checklist and summary to the user (total items, blocking items count, conditional items count, departments with incomplete sections).
If not in dry-run mode, ask: "May I write this to `production/releases/launch-checklist-[date].md`?"
If yes, write the file, creating directories as needed.
---
## Phase 6: Next Steps
- Run `/gate-check` to get a formal PASS/CONCERNS/FAIL verdict before launch.
- Coordinate sign-offs via `/team-release`.


@ -6,62 +6,98 @@ user-invocable: true
agent: localization-lead
allowed-tools: Read, Glob, Grep, Write, Bash
---
## Phase 1: Parse Subcommand

Determine the mode from the argument:

- `scan` — Scan for localization issues (hardcoded strings, missing keys)
- `extract` — Extract new strings and generate/update string tables
- `validate` — Validate existing translations for completeness and format
- `status` — Report overall localization status

If no subcommand is provided, output usage and stop. Verdict: **FAIL** — missing required subcommand.

---

## Phase 2A: Scan Mode

Search `src/` for hardcoded user-facing strings:

- String literals in UI code not wrapped in a localization function
- Concatenated strings that should be parameterized
- Strings with positional placeholders (`%s`, `%d`) instead of named ones (`{playerName}`)

Search for localization anti-patterns:

- Date/time formatting not using locale-aware functions
- Number formatting without locale awareness
- Text embedded in images or textures (flag asset files)
- Strings that assume left-to-right text direction

Report all findings with file paths and line numbers. This mode is read-only — no files are written.
---
## Phase 2B: Extract Mode
- Scan all source files for localized string references
- Compare against the existing string table (if any) in `assets/data/`
- Generate new entries for strings that don't have keys yet
- Suggest key names following the convention: `[category].[subcategory].[description]`
- Output a diff of new strings to add to the string table
Present the diff to the user. Ask: "May I write these new entries to `assets/data/strings/strings-[locale].json`?"
If yes, write only the diff (new entries), not a full replacement. Verdict: **COMPLETE** — strings extracted and written.
If no, stop here. Verdict: **BLOCKED** — user declined write.
---
## Phase 2C: Validate Mode
- Read all string table files in `assets/data/`
- Check each entry for:
- Missing translations (key exists but no translation for a locale)
- Placeholder mismatches (source has `{name}` but translation is missing it)
- String length violations (exceeds character limits for UI elements)
- Orphaned keys (translation exists but nothing references the key in code)
- Report validation results grouped by locale and severity. This mode is read-only — no files are written.
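The placeholder-mismatch check above can be sketched as a set comparison between the named placeholders in the source string and those in the translation. The `{name}` syntax matches the convention used elsewhere in this skill.

```python
import re

def placeholder_mismatches(source, translation):
    """Return (missing, extra): placeholders present in the source
    but absent from the translation, and vice versa."""
    def find(s):
        return set(re.findall(r"\{(\w+)\}", s))
    src, tr = find(source), find(translation)
    return src - tr, tr - src
```

An empty pair of sets means the entry passes this check; a non-empty `missing` set is the typical severity-High finding.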
---
## Phase 2D: Status Mode
- Count total localizable strings
- Per locale: count translated, untranslated, and stale (source changed since translation)
- Generate a coverage matrix:
```markdown
## Localization Status
Generated: [Date]
| Locale | Total | Translated | Missing | Stale | Coverage |
|--------|-------|-----------|---------|-------|----------|
| en (source) | [N] | [N] | 0 | 0 | 100% |
| [locale] | [N] | [N] | [N] | [N] | [X]% |
### Issues
- [N] hardcoded strings found in source code
- [N] strings exceeding character limits
- [N] placeholder mismatches
- [N] orphaned keys (can be cleaned up)
```
This mode is read-only — no files are written.
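One locale row of the coverage matrix above reduces to simple arithmetic. Whether stale entries still count as translated is a policy choice the skill does not specify; this sketch counts them as translated.

```python
def coverage_row(total, translated, stale):
    """Compute one locale's row of the coverage matrix."""
    missing = total - translated
    pct = round(100 * translated / total) if total else 100
    return {"translated": translated, "missing": missing,
            "stale": stale, "coverage": f"{pct}%"}
```

For example, 150 translated strings out of 200 yields 75% coverage for that locale.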
---
## Phase 3: Next Steps
- If scan found hardcoded strings: run `/localize extract` to begin extracting them.
- If validate found missing translations: share the report with the translation team.
- If approaching launch: run `/asset-audit` to verify all localized assets are present.
### Rules
- English (en) is always the source locale


@ -208,6 +208,9 @@ After writing, update `production/session-state/active.md` with:
- File: design/gdd/systems-index.md
- Next: Design individual system GDDs
**Verdict: COMPLETE** — systems index written to `design/gdd/systems-index.md`.
If the user declined: **Verdict: BLOCKED** — user did not approve the write.
---
## 7. Phase 6: Design Individual Systems (Handoff to /design-system)


@ -6,19 +6,22 @@ user-invocable: true
allowed-tools: Read, Glob, Grep, Write
---
## Phase 1: Load Milestone Data
Read the milestone definition from `production/milestones/`. If the argument is `current`, use the most recently modified milestone file.
Read all sprint reports for sprints within this milestone from `production/sprints/`.
---
## Phase 2: Scan Codebase Health
- Scan for `TODO`, `FIXME`, `HACK` markers that indicate incomplete work
- Check the risk register at `production/risk-register/`
---
## Phase 3: Generate the Milestone Review
```markdown
# Milestone Review: [Milestone Name]
@ -89,3 +92,22 @@ When this skill is invoked:
| # | Action | Owner | Deadline |
|---|--------|-------|----------|
```
---
## Phase 4: Save Review
Present the review to the user.
Ask: "May I write this to `production/milestones/[milestone-name]-review.md`?"
If yes, write the file, creating the directory if needed. Verdict: **COMPLETE** — milestone review saved.
If no, stop here. Verdict: **BLOCKED** — user declined write.
---
## Phase 5: Next Steps
- Run `/gate-check` for a formal phase gate verdict if this milestone marks a development phase boundary.
- Run `/sprint-plan` to adjust the next sprint based on the scope recommendations above.


@ -7,23 +7,27 @@ allowed-tools: Read, Glob, Grep, Write
model: haiku
---
## Phase 1: Load Project Context
Read CLAUDE.md for project overview and standards.
Read the relevant agent definition from `.claude/agents/` if a specific role is specified.
---
## Phase 2: Scan Relevant Area
- For programmers: scan `src/` for architecture, patterns, key files
- For designers: scan `design/` for existing design documents
- For narrative: scan `design/narrative/` for world-building and story docs
- For QA: scan `tests/` for existing test coverage
- For production: scan `production/` for current sprint and milestone
Read recent changes (git log if available) to understand current momentum.
---
## Phase 3: Generate Onboarding Document
```markdown
# Onboarding: [Role/Area]
@ -70,3 +74,23 @@ When this skill is invoked:
## Questions to Ask
[Questions the new contributor should ask to get fully oriented]
```
---
## Phase 4: Save Document
Present the onboarding document to the user.
Ask: "May I write this to `production/onboarding/onboard-[role]-[date].md`?"
If yes, write the file, creating the directory if needed.
---
## Phase 5: Next Steps
Verdict: **COMPLETE** — onboarding document generated.
- Share the onboarding doc with the new contributor before their first session.
- Run `/sprint-status` to show the new contributor current progress.
- Run `/help` if the contributor needs guidance on what to work on next.


@ -8,36 +8,47 @@ model: haiku
agent: community-manager
---
## Phase 1: Parse Arguments
- `version`: the release version to generate notes for (e.g., `1.2.0`)
- `--style`: output style — `brief` (bullet points), `detailed` (with context), `full` (with developer commentary). Default: `detailed`.
If no version is provided, ask the user before proceeding.
---
## Phase 2: Gather Change Data
- Read the internal changelog at `production/releases/[version]/changelog.md` if it exists
- Run `git log` between the previous release tag and current tag/HEAD
- Read sprint retrospectives in `production/sprints/` for context
- Read any balance change documents in `design/balance/`
- Read bug fix records from QA if available
---
## Phase 3: Categorize and Translate
Categorize all changes into player-facing categories:
- **New Content**: new features, maps, characters, items, modes
- **Gameplay Changes**: balance adjustments, mechanic changes, progression changes
- **Quality of Life**: UI improvements, convenience features, accessibility
- **Bug Fixes**: grouped by system (combat, UI, networking, etc.)
- **Performance**: optimization improvements players might notice
- **Known Issues**: transparency about unresolved problems
Translate developer language to player language:
- "Refactored damage calculation pipeline" → "Improved hit detection accuracy"
- "Fixed null reference in inventory manager" → "Fixed a crash when opening inventory"
- "Reduced GC allocations in combat loop" → "Improved combat performance"
- Remove purely internal changes that don't affect players
- Preserve specific numbers for balance changes (damage: 50 → 45)
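The translation step above can be sketched as a lookup with a pass-through fallback. The phrase table entries here are the examples from this skill; real entries come from reviewing the changelog, and unmapped lines fall through unchanged so balance numbers like "damage: 50 → 45" survive intact.

```python
# Phrase table seeded from the examples above; extend per release.
PLAYER_SPEAK = {
    "Refactored damage calculation pipeline": "Improved hit detection accuracy",
    "Fixed null reference in inventory manager": "Fixed a crash when opening inventory",
    "Reduced GC allocations in combat loop": "Improved combat performance",
}

def to_player_language(change):
    """Map a developer-facing changelog line to player-facing wording,
    falling back to the original so nothing is silently dropped."""
    return PLAYER_SPEAK.get(change, change)
```

Purely internal changes are removed in a separate filtering pass, not by this mapping.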
---
## Phase 4: Generate Patch Notes
### Brief Style
```markdown
@ -108,15 +119,33 @@ Includes everything from Detailed, plus:
> what the team learned. Written in first-person team voice.]
```
---
## Phase 5: Review Output
Check the generated notes for:
- No internal jargon (replace technical terms with player-friendly language)
- No references to internal systems, tickets, or sprint numbers
- Balance changes include before/after values
- Bug fixes describe the player experience, not the technical cause
- Tone matches the game's voice (adjust formality based on game style)
---
## Phase 6: Save Patch Notes
Present the completed patch notes to the user along with: a count of changes by category, and any internal changes that were excluded (for review).
Ask: "May I write this to `production/releases/[version]/patch-notes.md`?"
If yes, write the file, creating the directory if needed.
---
## Phase 7: Next Steps
Verdict: **COMPLETE** — patch notes generated and saved.
- Run `/release-checklist` to verify all other release gates are met before publishing.
- Share the patch notes draft with the community-manager for tone review before posting publicly.


@ -6,110 +6,120 @@ user-invocable: true
agent: performance-analyst
allowed-tools: Read, Glob, Grep, Bash
---
When this skill is invoked:
1. **Determine scope** from the argument:
- If a system name: focus profiling on that specific system
- If `full`: run a comprehensive profile across all systems
## Phase 1: Determine Scope
2. **Read performance budgets** — Check for existing performance targets in design docs or CLAUDE.md:
- Target FPS (e.g., 60fps = 16.67ms frame budget)
- Memory budget (total and per-system)
- Load time targets
- Draw call budgets
- Network bandwidth limits (if multiplayer)
Read the argument:
3. **Analyze the codebase** for common performance issues:
- System name → focus profiling on that specific system
- `full` → run a comprehensive profile across all systems
**CPU Profiling Targets**:
- `_process()` / `Update()` / `Tick()` functions — list all and estimate cost
- Nested loops over large collections
- String operations in hot paths
- Allocation patterns in per-frame code
- Unoptimized search/sort over game entities
- Expensive physics queries (raycasts, overlaps) every frame
---
**Memory Profiling Targets**:
- Large data structures and their growth patterns
- Texture/asset memory footprint estimates
- Object pool vs instantiate/destroy patterns
- Leaked references (objects that should be freed but aren't)
- Cache sizes and eviction policies
## Phase 2: Load Performance Budgets
**Rendering Targets** (if applicable):
- Draw call estimates
- Overdraw from overlapping transparent objects
- Shader complexity
- Unoptimized particle systems
- Missing LODs or occlusion culling
Check for existing performance targets in design docs or CLAUDE.md:
**I/O Targets**:
- Save/load performance
- Asset loading patterns (sync vs async)
- Network message frequency and size
- Target FPS (e.g., 60fps = 16.67ms frame budget)
- Memory budget (total and per-system)
- Load time targets
- Draw call budgets
- Network bandwidth limits (if multiplayer)
4. **Generate the profiling report**:
---
```markdown
## Performance Profile: [System or Full]
Generated: [Date]
## Phase 3: Analyze Codebase
### Performance Budgets
| Metric | Budget | Estimated Current | Status |
|--------|--------|-------------------|--------|
| Frame time | [16.67ms] | [estimate] | [OK/WARNING/OVER] |
| Memory | [target] | [estimate] | [OK/WARNING/OVER] |
| Load time | [target] | [estimate] | [OK/WARNING/OVER] |
| Draw calls | [target] | [estimate] | [OK/WARNING/OVER] |
**CPU Profiling Targets:**
- `_process()` / `Update()` / `Tick()` functions — list all and estimate cost
- Nested loops over large collections
- String operations in hot paths
- Allocation patterns in per-frame code
- Unoptimized search/sort over game entities
- Expensive physics queries (raycasts, overlaps) every frame
### Hotspots Identified
| # | Location | Issue | Estimated Impact | Fix Effort |
|---|----------|-------|------------------|------------|
| 1 | [file:line] | [description] | [High/Med/Low] | [S/M/L] |
| 2 | [file:line] | [description] | [High/Med/Low] | [S/M/L] |
**Memory Profiling Targets:**
- Large data structures and their growth patterns
- Texture/asset memory footprint estimates
- Object pool vs instantiate/destroy patterns
- Leaked references (objects that should be freed but aren't)
- Cache sizes and eviction policies
### Optimization Recommendations (Priority Order)
1. **[Title]** — [Description of the optimization]
- Location: [file:line]
- Expected gain: [estimate]
- Risk: [Low/Med/High]
- Approach: [How to implement]
**Rendering Targets (if applicable):**
- Draw call estimates
- Overdraw from overlapping transparent objects
- Shader complexity
- Unoptimized particle systems
- Missing LODs or occlusion culling
### Quick Wins (< 1 hour each)
- [Simple optimization 1]
- [Simple optimization 2]
**I/O Targets:**
- Save/load performance
- Asset loading patterns (sync vs async)
- Network message frequency and size
### Requires Investigation
- [Area that needs actual runtime profiling to determine impact]
```
---
5. **Output the report** with a summary: top 3 hotspots, estimated headroom vs budget, and recommended next action.
## Phase 4: Generate Profiling Report
```markdown
## Performance Profile: [System or Full]
Generated: [Date]

### Performance Budgets
| Metric | Budget | Estimated Current | Status |
|--------|--------|-------------------|--------|
| Frame time | [16.67ms] | [estimate] | [OK/WARNING/OVER] |
| Memory | [target] | [estimate] | [OK/WARNING/OVER] |
| Load time | [target] | [estimate] | [OK/WARNING/OVER] |
| Draw calls | [target] | [estimate] | [OK/WARNING/OVER] |

### Hotspots Identified
| # | Location | Issue | Estimated Impact | Fix Effort |
|---|----------|-------|------------------|------------|

### Optimization Recommendations (Priority Order)
1. **[Title]** — [Description]
   - Location: [file:line]
   - Expected gain: [estimate]
   - Risk: [Low/Med/High]
   - Approach: [How to implement]

### Quick Wins (< 1 hour each)
- [Simple optimization 1]

### Requires Investigation
- [Area that needs actual runtime profiling to confirm impact]
```
Output the report with a summary: top 3 hotspots, estimated headroom vs budget, and recommended next action.
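The Status column in the Performance Budgets table can be filled in mechanically. A minimal sketch, assuming a 90% warning threshold (the threshold is a project choice, not mandated by this skill):

```python
def budget_status(estimate: float, budget: float, warn_ratio: float = 0.9) -> str:
    """Classify an estimated metric against its budget."""
    if estimate > budget:
        return "OVER"
    if estimate >= warn_ratio * budget:  # within 10% of the budget
        return "WARNING"
    return "OK"
```

For example, an 18 ms frame-time estimate against a 16.67 ms budget is OVER, while 15.5 ms lands in WARNING.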
---
## Phase 5: Scope and Timeline Decision
Activate this phase only if any hotspot has Fix Effort rated M or L.
Present significant-effort items and ask the user to choose for each:
- **A) Implement the optimization** (proceed with fix now or schedule it)
- **B) Reduce feature scope** (run `/scope-check [feature]` to analyze trade-offs)
- **C) Accept the performance hit and defer to Polish phase** (log as known issue)
- **D) Escalate to technical-director for an architectural decision** (run `/architecture-decision`)
If multiple items are deferred to Polish (choice C), record them under `### Deferred to Polish`.
This skill is read-only — no files are written. Verdict: **COMPLETE** — performance profile generated.
---
## Phase 6: Next Steps
- If bottlenecks require architectural change: run `/architecture-decision`.
- If scope reduction is needed: run `/scope-check [feature]`.
- To schedule optimizations: run `/sprint-plan update`.
### Rules
- Never optimize without measuring first — gut feelings about performance are unreliable
- Recommendations must include estimated impact — "make it faster" is not actionable
- Profile on target hardware, not just development machines
- Distinguish between CPU-bound, GPU-bound, and I/O-bound bottlenecks
- Consider worst-case scenarios (maximum entities, lowest spec hardware, worst network conditions)
- Static analysis (this skill) identifies candidates; runtime profiling confirms

View file

@ -6,7 +6,18 @@ user-invocable: true
allowed-tools: Read, Glob, Grep, Write
---
## Phase 1: Parse Arguments
Determine the mode:
- `new` → generate a blank playtest report template
- `analyze [path]` → read raw notes and fill in the template with structured findings
---
## Phase 2A: New Template Mode
Generate this template and output it to the user:
```markdown
# Playtest Report
@ -32,11 +43,9 @@ When invoked with `new`, generate this template:
## Gameplay Flow
### What worked well
- [Observation 1]
- [Observation 2]
### Pain points
- [Issue 1 -- Severity: High/Medium/Low]
- [Issue 2 -- Severity: High/Medium/Low]
### Confusion points
- [Where the player was confused and why]
@ -72,38 +81,44 @@ When invoked with `new`, generate this template:
3. [Third priority]
```
---
## Phase 2B: Analyze Mode
Read the raw notes at the provided path. Cross-reference with existing design documents. Fill in the template above with structured findings. Flag any playtest observations that conflict with design intent.
---
## Phase 3: Action Routing
Categorize all findings into four buckets (a single finding may appear in more than one bucket if appropriate):
- **Design changes needed** — fun issues, player confusion, broken mechanics, observations that conflict with the GDD's intended experience
- **Balance adjustments** — numbers feel wrong, difficulty too spiked or too flat, economy or progression feedback
- **Bug reports** — clear implementation defects that are reproducible
- **Polish items** — not blocking progress, but friction or feel issues noted for later
Present the categorized list, then route each non-empty bucket:
- **Design changes:** "Run `/propagate-design-change [path]` on the affected design document to find downstream impacts before making changes."
- **Balance adjustments:** "Run `/balance-check [system]` to verify the full balance picture before tuning values."
- **Bugs:** "Use `/bug-report` to formally track these so they are not lost between sessions."
- **Polish items:** "Add to the polish backlog in `production/` when the team reaches that phase."
Finally, ask:
> "Which category would you like to act on first?"
---
## Phase 4: Save Report
Ask: "May I write this playtest report to `production/qa/playtests/playtest-[date]-[tester].md`?"
If yes, write the file, creating the directory if needed.
---
## Phase 5: Next Steps
Verdict: **COMPLETE** — playtest report generated.
- Act on the highest-priority finding category first.
- After addressing design changes: re-run `/design-review` on the updated GDD.
- After fixing bugs: re-run `/bug-triage` to update priorities.

View file

@ -3,7 +3,7 @@ name: project-stage-detect
description: "Automatically analyze project state, detect stage, identify gaps, and recommend next steps based on existing artifacts. Use when user asks 'where are we in development', 'what stage are we in', 'full project audit'."
argument-hint: "[optional: role filter like 'programmer' or 'designer']"
user-invocable: true
allowed-tools: Read, Glob, Grep, Bash, Write
context: fork
model: haiku
agent: Explore

View file

@ -190,6 +190,9 @@ The document contains:
- Resolution decisions made in step 7
- List of ADRs that need to be written or updated
If user approved: Verdict: **COMPLETE** — change impact report saved.
If user declined: Verdict: **BLOCKED** — user declined write.
---
## 10. Follow-Up Actions

View file

@ -9,42 +9,57 @@ agent: prototyper
isolation: worktree
---
## Phase 1: Define the Question
Read the concept description from the argument. Identify the core question this prototype must answer. If the concept is vague, state the question explicitly before proceeding — a prototype without a clear question wastes time.
---
## Phase 2: Load Project Context
Read `CLAUDE.md` for project context and the current tech stack. Understand what engine, language, and frameworks are in use so the prototype is built with compatible tooling.
---
## Phase 3: Plan the Prototype
Define in 3-5 bullet points what the minimum viable prototype looks like:
- What is the core question?
- What is the absolute minimum code needed to answer it?
- What can be skipped (error handling, polish, architecture)?
Present this plan to the user before building. Ask for confirmation if scope seems unclear.
---
## Phase 4: Implement
Ask: "May I create the prototype directory at `prototypes/[concept-name]/` and begin implementation?" Here `[concept-name]` is a short, kebab-case identifier derived from the concept.
If yes, create the directory. Every file must begin with:
```
// PROTOTYPE - NOT FOR PRODUCTION
// Question: [Core question being tested]
// Date: [Current date]
```
Standards are intentionally relaxed:
- Hardcode values freely
- Use placeholder assets
- Skip error handling
- Use the simplest approach that works
- Copy code rather than importing from production
Run the prototype. Observe behavior. Collect any measurable data (frame times, interaction counts, feel assessments).
---
## Phase 5: Generate Prototype Report
Draft the report:
```markdown
## Prototype Report: [Concept Name]
@ -87,32 +102,46 @@ When this skill is invoked:
[Discoveries that affect other systems or future work]
```
Ask: "May I write this report to `prototypes/[concept-name]/REPORT.md`?"
If yes, write the file.
---
## Phase 6: Creative Director Review
Delegate the decision to the creative-director. Spawn a `creative-director` subagent via Task and provide:
- The full REPORT.md content
- The original design question
- Any game pillars or concept doc from `design/gdd/` that are relevant
Ask the creative-director to:
- Evaluate the prototype result against the game's creative vision and pillars
- Confirm, modify, or override the prototyper's PROCEED / PIVOT / KILL recommendation
- If PROCEED: identify any creative constraints for the production implementation
- If PIVOT: specify which direction aligns better with the pillars
- If KILL: note whether the underlying player need should be addressed differently
The creative-director's decision is final. Update the REPORT.md `Recommendation` section with the creative-director's verdict if it differs from the prototyper's.
---
## Phase 7: Summary and Next Steps
Output a summary to the user: the core question, the result, the prototyper's initial recommendation, and the creative-director's final decision. Link to the full report at `prototypes/[concept-name]/REPORT.md`.
If **PROCEED**: run `/design-system` to begin the production GDD for this mechanic, or `/architecture-decision` to record key technical decisions before implementation.
If **PIVOT** or **KILL**: no further action needed — the prototype report is the deliverable.
Verdict: **COMPLETE** — prototype finished. Recommendation is PROCEED, PIVOT, or KILL based on findings above.
### Important Constraints
- Prototype code must NEVER import from production source files
- Production code must NEVER import from prototype directories
- If the recommendation is PROCEED, the production implementation must be written from scratch — prototype code is not refactored into production
- Total prototype effort should be timeboxed to 1-3 days equivalent of work
- If the prototype scope starts growing, stop and reassess whether the question can be simplified

View file

@ -234,6 +234,8 @@ After writing (if approved):
- If coverage drift detected: "Regression suite may be drifting. Consider
running `/regression-suite audit` at the next sprint boundary."
Verdict: **COMPLETE** — regression suite updated. (If user declined write: Verdict: **BLOCKED**.)
---
## Collaborative Protocol

View file

@ -6,29 +6,35 @@ user-invocable: true
allowed-tools: Read, Glob, Grep, Write
---
> **Explicit invocation only**: This skill should only run when the user explicitly requests it with `/release-checklist`. Do not auto-invoke based on context matching.
## Phase 1: Parse Arguments
Read the argument for the target platform (`pc`, `console`, `mobile`, or `all`). If no platform is specified, default to `all`.
---
## Phase 2: Load Project Context
- Read `CLAUDE.md` for project context, version information, and platform targets.
- Read the current milestone from `production/milestones/` to understand what features and content should be included in this release.
---
## Phase 3: Scan Codebase
Scan for outstanding issues:
- Count `TODO` comments
- Count `FIXME` comments
- Count `HACK` comments
- Note their locations and severity
Check for test results in any test output directories or CI logs if available.
---
## Phase 4: Generate the Release Checklist
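The marker counts above can be gathered with a short script. A sketch, assuming a few common source extensions (adjust `SOURCE_SUFFIXES` to the project's actual languages):

```python
import re
from pathlib import Path

SOURCE_SUFFIXES = {".gd", ".cs", ".cpp", ".h", ".py"}  # assumed; match your project

def count_markers(root: str, tags: tuple[str, ...] = ("TODO", "FIXME", "HACK")) -> dict[str, int]:
    """Count outstanding work markers in source files under root."""
    counts = {tag: 0 for tag in tags}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SOURCE_SUFFIXES:
            continue
        text = path.read_text(errors="ignore")
        for tag in tags:
            counts[tag] += len(re.findall(rf"\b{tag}\b", text))
    return counts
```

The per-tag totals feed directly into the Known Blockers summary in Phase 5.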
```markdown
## Release Checklist: [Version] -- [Platform]
@ -68,9 +74,9 @@ Generated: [Date]
- [ ] Credits complete and accurate
```
Add platform-specific sections based on the argument:
**For `pc`:**
```markdown
### Platform Requirements: PC
- [ ] Minimum and recommended specs verified and documented
@ -85,7 +91,7 @@ For `pc`:
- [ ] Steam Deck compatibility verified (if targeting)
```
**For `console`:**
```markdown
### Platform Requirements: Console
- [ ] TRC/TCR/Lotcheck requirements checklist complete
@ -99,7 +105,7 @@ For `console`:
- [ ] First-party certification submission prepared
```
**For `mobile`:**
```markdown
### Platform Requirements: Mobile
- [ ] App store guidelines compliance verified
@ -114,8 +120,7 @@ For `mobile`:
- [ ] App size within store limits
```
**Store and launch sections (all platforms):**
```markdown
### Store / Distribution
- [ ] Store page metadata complete and proofread
@ -158,9 +163,19 @@ resolution and estimated time to address them.]
- [ ] Creative Director
```
---
## Phase 5: Save Checklist
Present the checklist to the user with: total checklist items, number of known blockers (FIXME/HACK counts, known bugs).
Ask: "May I write this to `production/releases/release-checklist-[version].md`?"
If yes, write the file, creating the directory if needed.
---
## Phase 6: Next Steps
- Run `/gate-check` for a formal phase gate verdict before proceeding to release.
- Coordinate final sign-offs via `/team-release`.

View file

@ -8,41 +8,50 @@ context: |
!git log --oneline --since="2 weeks ago" 2>/dev/null
---
## Phase 1: Parse Arguments
Determine whether this is a sprint retrospective (`sprint-N`) or a milestone retrospective (`milestone-name`).
---
## Phase 2: Load Sprint or Milestone Data
Read the sprint or milestone plan from the appropriate location:
- Sprint plans: `production/sprints/`
- Milestone definitions: `production/milestones/`
Extract: planned tasks, estimated effort, owners, and goals.
Read the git log for the period covered by the sprint or milestone to understand what was actually committed and when.
---
## Phase 3: Analyze Completion and Trends
Scan for completed and incomplete tasks by comparing the plan against actual deliverables. Check for:
- Tasks completed as planned
- Tasks completed but modified from the plan
- Tasks carried over (not completed)
- Tasks added mid-sprint (unplanned work)
- Tasks removed or descoped
Scan the codebase for TODO/FIXME trends:
- Count current TODO/FIXME/HACK comments
- Compare to previous sprint counts if available (check previous retrospectives)
- Note whether technical debt is growing or shrinking
Read previous retrospectives (if any) from `production/sprints/` or `production/milestones/` to check:
- Were previous action items addressed?
- Are the same problems recurring?
- How has velocity trended?
---
## Phase 4: Generate the Retrospective
```markdown
## Retrospective: [Sprint N / Milestone Name]
@ -136,23 +145,30 @@ tasks? What adjustment should we apply?]
the single most important thing to change going forward?]
```
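The velocity adjustment prompted for in the template can be estimated from the completion rate. A sketch (linear scaling is an assumption; real teams often smooth the rate across several sprints):

```python
def adjusted_capacity(planned: int, completed: int, next_baseline: int) -> int:
    """Scale the next sprint's planned load by the observed completion rate."""
    rate = completed / planned if planned else 1.0
    return round(next_baseline * rate)
```

A sprint that finished 8 of 10 planned tasks suggests planning roughly 8 tasks where 10 would otherwise be scheduled.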
---
## Phase 5: Save Retrospective
Present the retrospective and top findings to the user (completion rate, velocity trend, top blocker, most important action item).
Ask: "May I write this to `production/sprints/sprint-[N]-retrospective.md`?" (or `production/milestones/[milestone-name]-retrospective.md` for a milestone)
If yes, write the file, creating the directory if needed. Verdict: **COMPLETE** — retrospective saved.
If no, stop here. Verdict: **BLOCKED** — user declined write.
---
## Phase 6: Next Steps
- Run `/sprint-plan` to incorporate the action items and velocity data into the next sprint.
- If this was a milestone retrospective, run `/gate-check` to formally assess readiness for the next phase.
### Guidelines
- Be honest and specific. Vague retrospectives ("communication could be better") produce vague improvements. Use data and examples.
- Focus on systemic issues, not individual blame.
- Limit action items to 3-5. More than that dilutes focus.
- Every action item must have an owner and a deadline.
- Check whether previous action items were completed. Recurring unaddressed items are a process smell.
- If this is a milestone retrospective, also evaluate whether the milestone goals were achieved and what that means for the overall project timeline.

View file

@ -21,7 +21,7 @@ appropriate design or architecture documentation. Use this when:
## Workflow
## Phase 1: Parse Arguments
**Format**: `/reverse-document <type> <path>`
@ -42,7 +42,7 @@ appropriate design or architecture documentation. Use this when:
/reverse-document concept prototypes/vehicle-combat
```
## Phase 2: Analyze Implementation
**Read and understand the code/prototype**:
@ -67,17 +67,17 @@ appropriate design or architecture documentation. Use this when:
- Find technical feasibility insights
- Document player fantasy / feel
## Phase 3: Ask Clarifying Questions
**DO NOT** just describe the code. **ASK** about intent:
**Design questions**:
- "I see a [resource] system that depletes during [activity]. Was this for:
- Pacing (prevent spam)?
- Resource management (strategic depth)?
- Or something else?"
- "The [mechanic] seems central. Is this a core pillar, or supporting feature?"
- "[Value] scales exponentially with [factor]. Intentional design, or needs rebalancing?"
**Architecture questions**:
- "You're using a service locator pattern. Was this chosen for:
@ -90,35 +90,34 @@ appropriate design or architecture documentation. Use this when:
- "The prototype emphasizes stealth over combat. Is that the intended pillar?"
- "Players seem to exploit the grappling hook for speed. Feature or bug?"
## Phase 4: Present Findings
Before drafting, show what you discovered:
```
I've analyzed [path]/. Here's what I found:
MECHANICS IMPLEMENTED:
- [mechanic-a] with [property] (e.g. timing windows, cooldowns)
- [mechanic-b] (e.g. interaction between two states)
- [resource] system (depletes on [action], regens on [condition])
- [state] system (builds up, triggers [effect])
FORMULAS DISCOVERED:
- [Output] = [formula using discovered variables]
- [Secondary output] = [formula]
UNCLEAR INTENT AREAS:
1. [Resource] system — pacing or resource management?
2. [Mechanic] — core pillar or supporting feature?
3. [Value] scaling — intentional design or needs tuning?
Before I draft the design doc, could you clarify these points?
```
Wait for user to clarify intent before drafting.
## Phase 5: Draft Document Using Template
Based on type, use appropriate template:
@ -134,41 +133,41 @@ Based on type, use appropriate template:
- Identify **what's missing** (edge cases not handled, gaps in design)
- Flag **follow-up work** (balance tuning, missing features)
## Phase 6: Show Draft and Request Approval
**Collaborative protocol**:
```
I've drafted the [system-name] design doc based on your code and clarifications.
[Show key sections: Overview, Mechanics, Formulas, Design Intent]
ADDITIONS I MADE:
- Documented [mechanic] as "[intent]" per your clarification
- Added edge cases not in code (e.g., what if [resource] hits 0 mid-[action]?)
- Flagged balance concern: [scaling type] scaling at [boundary condition]
SECTIONS MARKED AS INCOMPLETE:
- "[System] interaction with [other-system]" (not fully implemented yet)
- "[Variant or feature]" (only [subset] implemented so far)
May I write this to design/gdd/[system-name].md?
```
Wait for approval. User may request changes before writing.
## Phase 7: Write Document with Metadata
When approved, write the file with special markers:
```markdown
---
status: reverse-documented
source: [path/]
date: [today]
verified-by: [User name]
---
# [System Name] Design
> **Note**: This document was reverse-engineered from the existing implementation.
> It captures current behavior and clarified design intent. Some sections may be
@ -177,7 +176,7 @@ verified-by: [User name]
[Rest of document...]
```
## Phase 8: Flag Follow-Up Work
After writing, suggest next steps:
@ -209,41 +208,41 @@ Would you like me to tackle any of these now?
---
## Example Session: Reverse-Document a System
```
User: /reverse-document design src/gameplay/[system]
Agent: I'll analyze your [system] implementation to create a design doc.
[Reads code, discovers mechanics, formulas]
Agent: I've analyzed src/gameplay/[system]/. Here's what I found:
[Shows mechanics, formulas, unclear areas]
Before drafting, could you clarify:
1. [Resource] system intent?
2. Is [mechanic] a core pillar?
3. [Value] scaling — intentional or needs tuning?
User: 1. [Resource] is for pacing, prevent [unwanted behavior]
2. [Mechanic] is core — I want [design intent]
3. Yeah, exponential is too much, should be linear
Agent: Perfect, that clarifies the vision. Let me draft the design doc.
[Shows draft with clarified intent]
May I write this to design/gdd/[system-name].md?
User: Yes, write it.
Agent: ✅ Written to design/gdd/[system-name].md
✅ Marked as [REVERSE-DOCUMENTED]
✅ Flagged [value] scaling for rebalancing
Next steps:
- Update [formula] to [corrected scaling]
- Run /balance-check to validate [curve]
- Document [mechanic] as core pillar in game-pillars.md
```
---
@ -258,7 +257,7 @@ This skill follows the collaborative design principle:
4. **User Clarifies**: Separate intent from accidents
5. **Draft Document**: Create doc based on reality + intent
6. **Show Draft**: Display key sections, explain additions
7. **Get Approval**: "May I write to [filepath]?" On approval: Verdict: **COMPLETE** — document generated. On decline: Verdict: **BLOCKED** — user declined write.
8. **Flag Follow-Up**: Suggest related work, don't auto-execute
**Never assume intent. Always ask before documenting "why".**

View file

@ -64,7 +64,25 @@ modified since the last review report file was written. Show the user which
GDDs are in scope based on summaries before doing any full reads. Only
proceed to L1 for those GDDs plus any GDDs listed in their "Key deps".
### Phase 1b — Registry Pre-Load (fast baseline)
Before full-reading any GDD, check for the entity registry:
```
Read path="design/registry/entities.yaml"
```
If the registry exists and has entries, use it as a **pre-built conflict
baseline**: known entities, items, formulas, and constants with their
authoritative values and source GDDs. In Phase 2, grep GDDs for registered
names first — this is faster than reading all GDDs in full before knowing
what to look for.
If the registry is empty or absent: proceed without it. Note in the report:
"Entity registry is empty — consistency checks rely on full GDD reads only.
Run `/consistency-check` after this review to populate the registry."
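How the pre-built baseline is used in Phase 2 can be sketched as follows. The registry schema shown here is an assumption; adapt the field names to whatever `design/registry/entities.yaml` actually contains:

```python
# Hypothetical registry entries, mirroring a loaded entities.yaml
registry = {
    "entities": [
        {"name": "base_speed", "value": 5.0, "source": "design/gdd/movement.md"},
        {"name": "respawn_delay", "value": 3, "source": "design/gdd/spawning.md"},
    ]
}

def registered_mentions(registry: dict, gdd_text: str) -> list[str]:
    """Registry names mentioned in a GDD: grep for these before any full reads."""
    return [e["name"] for e in registry["entities"] if e["name"] in gdd_text]
```

Only GDDs that mention a registered name need a full read for value comparison; the rest can stay at summary level.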
### Phase 1c — L1/L2: Full Document Load
Full-read the in-scope documents:
@ -105,8 +123,8 @@ reciprocal:
```
⚠️ Dependency Asymmetry
[system-a].md lists: Depends On → [system-b].md
[system-b].md does NOT list [system-a].md as a dependent
→ One of these documents has a stale dependency section
```
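A reciprocity check like the one above reduces to comparing two link maps. A sketch, assuming the Depends On and Dependents sections have already been parsed out of each GDD:

```python
def asymmetric_deps(depends_on: dict[str, set[str]], dependents: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (doc, dep) pairs where doc lists dep but dep does not list doc back."""
    return [
        (doc, dep)
        for doc, targets in sorted(depends_on.items())
        for dep in sorted(targets)
        if doc not in dependents.get(dep, set())
    ]
```

Each returned pair is one stale dependency section to flag in the report.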
@ -116,10 +134,8 @@ For each game rule, mechanic, or constraint defined in any GDD, check whether
any other GDD defines a contradicting rule for the same situation:
Categories to scan:
- **Floor/ceiling rules**: Does any GDD define a minimum value for an output? Does any other say a different system can bypass that floor? These contradict.
- **Resource ownership**: If two GDDs both define how a shared resource accumulates or depletes, do they agree?
- **State transitions**: If GDD-A describes what happens when a character dies,
does GDD-B's description of the same event agree?
- **Timing**: If GDD-A says "X happens on the same frame", does GDD-B assume
@ -129,9 +145,8 @@ Categories to scan:
```
🔴 Rule Contradiction
[system-a].md: "Minimum [output] after reduction is [floor_value]"
[system-b].md: "[mechanic] bypasses [system-a]'s rules and can reduce [output] to 0"
→ These rules directly contradict. Which GDD is authoritative?
```
@@ -143,8 +158,8 @@ with the same name and behaviour:
- If GDD-A says "combo multiplier from the combat system feeds into score", check
that the combat GDD actually defines a combo multiplier that outputs to score
- If GDD-A references "the progression curve defined in [system].md", check that
[system].md actually has that curve, not a different progression model
- If GDD-A was written before GDD-B and assumed a mechanic that GDD-B later
designed differently, flag GDD-A as containing a stale reference
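A sketch of the cross-reference check. The `the X defined in Y.md` phrase pattern is an assumption; real GDD prose would need broader matching:

```python
import re

REF_PATTERN = re.compile(r"the ([\w\s-]+?) defined in ([\w-]+\.md)")

def find_stale_references(doc_text: str, defined: dict[str, set[str]]) -> list[str]:
    """Flag 'the X defined in Y.md' references where Y.md defines no X.

    `defined` maps each GDD filename to the mechanic names it actually defines.
    """
    findings = []
    for mechanic, target in REF_PATTERN.findall(doc_text):
        if mechanic.strip() not in defined.get(target, set()):
            findings.append(
                f"stale reference: '{mechanic.strip()}' not defined in {target}"
            )
    return findings
```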
@@ -164,8 +179,8 @@ Tuning Knobs sections across all GDDs and flag duplicates:
```
⚠️ Ownership Conflict
[system-a].md Tuning Knobs: "[multiplier_name] — controls [output] scaling"
[system-b].md Tuning Knobs: "[multiplier_name] — scales [output] with [factor]"
→ Two GDDs define multipliers on the same output. Which owns the final value?
This will produce either a double-application bug or a design conflict.
```
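Exact-name duplicates can be caught mechanically. This sketch only flags knobs that share the same name across docs; knobs with different names acting on the same output still need semantic review:

```python
from collections import defaultdict

def find_ownership_conflicts(knobs: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return knob names defined in more than one GDD's Tuning Knobs section.

    `knobs` maps GDD filename -> knob names listed in that doc.
    """
    owners: dict[str, list[str]] = defaultdict(list)
    for doc, names in knobs.items():
        for name in names:
            owners[name].append(doc)
    return {name: docs for name, docs in owners.items() if len(docs) > 1}
```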
@@ -176,21 +191,20 @@ For GDDs whose formulas are connected (output of one feeds input of another),
check that the output range of the upstream formula is within the expected
input range of the downstream formula:
- If [system-a].md outputs values between [min][max], and [system-b].md is
designed to receive values between [min2][max2], is the mismatch intentional?
- If an economy GDD expects resource acquisition in range X, and the
progression GDD generates it at range Y, the economy will be trivial or
inaccessible — is that intended?
Flag incompatibilities as CONCERNS (design judgment needed, not necessarily wrong):
```
⚠️ Formula Range Mismatch
[system-a].md: Max [output] = [value_a] (at max [condition])
[system-b].md: Base [input] = [value_b], max [input] = [value_c]
→ Late-[stage] [scenario] can resolve in a single [event].
Is this intentional? If not, either [system-a]'s ceiling or [system-b]'s ceiling needs adjustment.
```
### 2f: Acceptance Criteria Cross-Check
@@ -244,13 +258,13 @@ players. Present the count and flag if it exceeds 4 concurrent active systems:
```
⚠️ Cognitive Load Risk
Simultaneously active systems during [core loop moment]:
1. [system-a].md — [decision type] (active)
2. [system-b].md — [resource management] (active)
3. [system-c].md — [tracking] (active)
4. [system-d].md — [item/action use] (active)
5. [system-e].md — [cooldown/timer management] (active)
6. [system-f].md — [coordination decisions] (active)
→ 6 simultaneously active systems during the core loop.
Research suggests 3-4 is the comfortable limit for most players.
Consider: which of these can be made passive or simplified?
@@ -512,9 +526,9 @@ Scenarios walked: [N]
| GDD | Reason | Type | Priority |
|-----|--------|------|----------|
| [system-a].md | Rule contradiction with [system-b].md | Consistency | Blocking |
| [system-c].md | Stale reference to nonexistent mechanic | Consistency | Blocking |
| [system-d].md | No pillar alignment | Design Theory | Warning |
---
@@ -10,8 +10,10 @@ model: haiku
# Scope Check
This skill is read-only — it reports findings but writes no files.
Compares original planned scope against current state to detect, quantify, and triage
scope creep.
**Argument:** `$ARGUMENTS[0]` — feature name, sprint number, or milestone name.
@@ -74,8 +74,12 @@ Once the engine is chosen:
## 4. Update CLAUDE.md Technology Stack
Read `CLAUDE.md` and show the user the proposed Technology Stack changes.
Ask: "May I write these engine settings to `CLAUDE.md`?"
Wait for confirmation before making any edits.
Update the Technology Stack section, replacing the `[CHOOSE]` placeholders with the actual values:
**For Godot:**
```markdown
@@ -317,6 +321,10 @@ Create the full reference doc set by searching the web:
- Deprecated APIs with replacements
- New features and best practices
Ask: "May I create the engine reference docs under `docs/engine-reference/<engine>/`?"
Wait for confirmation before writing any files.
3. **Create the full reference directory**:
```
docs/engine-reference/<engine>/
@@ -338,7 +346,9 @@ Create the full reference doc set by searching the web:
## 8. Update CLAUDE.md Import
Ask: "May I update the `@` import in `CLAUDE.md` to point to the new engine reference?"
Wait for confirmation, then update the `@` import under "Engine Version Reference" to point to the
correct engine:
```markdown
@@ -354,6 +364,8 @@ Godot to Unity), update it.
## 9. Update Agent Instructions
Ask: "May I add a Version Awareness section to the engine specialist agent files?" before making any edits.
For the chosen engine's specialist agents, verify they have a
"Version Awareness" section. If not, add one following the pattern in
the existing Godot specialist agents.
@@ -512,6 +524,8 @@ Next Steps:
---
Verdict: **COMPLETE** — engine configured and reference docs populated.
## Guardrails
- NEVER guess an engine version — always verify via WebSearch or user confirmation
@@ -22,7 +22,7 @@ When this skill is invoked:
For `new`:
5. **Generate a sprint plan** following this format and present it to the user. Ask: "May I write this sprint plan to `production/sprints/sprint-[N].md`?" If yes, write the file, creating the directory if needed. Verdict: **COMPLETE** — sprint plan created. If no: Verdict: **BLOCKED** — user declined write.
```markdown
# Sprint [N] -- [Start Date] to [End Date]
@@ -8,37 +8,29 @@ allowed-tools: Read, Glob, Grep, AskUserQuestion
# Guided Onboarding
This skill is read-only — it reports findings but writes no files.
This skill is the entry point for new users. It does NOT assume you have a game idea, an engine preference, or any prior experience. It asks first, then routes you to the right workflow.
---
## Phase 1: Detect Project State
Before asking anything, silently gather context so you can tailor your guidance. Do NOT show these results unprompted — they inform your recommendations, not the conversation opener.
Check:
- **Engine configured?** Read `.claude/docs/technical-preferences.md`. If the Engine field contains `[TO BE CONFIGURED]`, the engine is not set.
- **Game concept exists?** Check for `design/gdd/game-concept.md`.
- **Source code exists?** Glob for source files in `src/` (`*.gd`, `*.cs`, `*.cpp`, `*.h`, `*.rs`, `*.py`, `*.js`, `*.ts`).
- **Prototypes exist?** Check for subdirectories in `prototypes/`.
- **Design docs exist?** Count markdown files in `design/gdd/`.
- **Production artifacts?** Check for files in `production/sprints/` or `production/milestones/`.
Store these findings internally to validate the user's self-assessment and tailor recommendations.
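The silent checks above reduce to a handful of filesystem probes. A sketch mirroring the listed paths; the prototype check is simplified to "any subdirectory":

```python
from pathlib import Path

def detect_project_state(root: str) -> dict:
    """Silently gather the Phase 1 signals; paths mirror the checks above."""
    r = Path(root)
    prefs = r / ".claude" / "docs" / "technical-preferences.md"
    engine_configured = (prefs.is_file()
                         and "[TO BE CONFIGURED]" not in prefs.read_text())
    src = r / "src"
    source_exts = ("*.gd", "*.cs", "*.cpp", "*.h", "*.rs", "*.py", "*.js", "*.ts")
    has_source = src.is_dir() and any(any(src.rglob(ext)) for ext in source_exts)
    gdd_dir = r / "design" / "gdd"
    proto = r / "prototypes"
    return {
        "engine_configured": engine_configured,
        "concept_exists": (gdd_dir / "game-concept.md").is_file(),
        "has_source": has_source,
        "gdd_count": len(list(gdd_dir.glob("*.md"))) if gdd_dir.is_dir() else 0,
        "has_prototypes": proto.is_dir() and any(p.is_dir() for p in proto.iterdir()),
    }
```

The returned dict is held internally and never shown unprompted, matching the rule above.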
---
## Phase 2: Ask Where the User Is
This is the first thing the user sees. Present these 4 options clearly:
@@ -47,32 +39,26 @@ This is the first thing the user sees. Present these 4 options clearly:
> Before I suggest anything, I'd like to understand where you're starting from.
> Where are you at with your game idea right now?
>
> **A) No idea yet** — I don't have a game concept at all. I want to explore and figure out what to make.
>
> **B) Vague idea** — I have a rough theme, feeling, or genre in mind (e.g., "something with space" or "a cozy farming game") but nothing concrete.
>
> **C) Clear concept** — I know the core idea — genre, basic mechanics, maybe a pitch sentence — but haven't formalized it into documents yet.
>
> **D) Existing work** — I already have design docs, prototypes, code, or significant planning done. I want to organize or continue the work.
Wait for the user's answer. Do not proceed until they respond.
---
## Phase 3: Route Based on Answer
#### If A: No idea yet
The user needs creative exploration before anything else.
1. Acknowledge that starting from zero is completely fine
2. Briefly explain what `/brainstorm` does (guided ideation using professional frameworks — MDA, player psychology, verb-first design)
3. Recommend running `/brainstorm open` as the next step
4. Show the recommended path:
- `/brainstorm` — discover your game concept
@@ -83,73 +69,52 @@ technical setup — all of that comes later.
#### If B: Vague idea
The user has a seed but needs help growing it into a concept.
1. Ask them to share their vague idea — even a few words is enough
2. Validate the idea as a starting point (don't judge or redirect)
3. Recommend running `/brainstorm [their hint]` to develop it
4. Show the recommended path:
- `/brainstorm [hint]` — develop the idea into a full concept
- `/setup-engine` — configure the engine
- `/map-systems` — decompose the concept into systems
- `/prototype` — test the core mechanic
- `/sprint-plan` — plan the first sprint
#### If C: Clear concept
The user knows what they want to make but hasn't documented it.
1. Ask 2-3 follow-up questions:
- What's the genre and core mechanic? (one sentence)
- Do they have an engine preference, or need help choosing?
- What's the rough scope? (jam game, small project, large project)
2. Offer two paths:
- **Formalize first**: Run `/brainstorm` to structure the concept into a proper game concept document
- **Jump to engine setup**: Go straight to `/setup-engine` and write the GDD manually afterward
3. Show the recommended path:
- `/brainstorm` or `/setup-engine` (their pick)
- `/design-review` — validate the concept doc
- `/map-systems` — decompose the concept into individual systems
- `/design-system` — author per-system GDDs
- `/architecture-decision` — make first technical decisions
- `/sprint-plan` — plan the first sprint
#### If D: Existing work
1. Share what you found in Phase 1:
- "I can see you have [X source files / Y design docs / Z prototypes]..."
- "Your engine is [configured as X / not yet configured]..."
2. **Sub-case D1 — Early stage** (engine not configured or only a game concept exists):
- Recommend `/setup-engine` first if engine not configured
- Then `/project-stage-detect` for a gap inventory
- Then pick up the normal pipeline from the detected phase
**Sub-case D2 — GDDs, ADRs, or stories already exist:**
- Explain: "Having files isn't the same as the template's skills being able to use them. GDDs might be missing required sections. `/adopt` checks this specifically."
- Recommend:
1. `/project-stage-detect` — identify the current phase and what's missing entirely
2. `/adopt` — audit whether existing artifacts are in the right internal format
3. Show the recommended path for D2:
- `/project-stage-detect` — phase detection + existence gaps
- `/adopt`**format compliance audit + migration plan** (the key brownfield tool)
- `/adopt` — format compliance audit + migration plan
- `/setup-engine` — if engine not configured
- `/design-system retrofit [path]` — fill missing GDD sections
- `/architecture-decision retrofit [path]` — add missing ADR sections
@@ -158,45 +123,33 @@ and are their existing artifacts in a format the template's skills can use.
---
## Phase 4: Confirm Before Proceeding
After presenting the recommended path, ask the user which step they'd like to take first. Never auto-run the next skill.
> "Would you like to start with [recommended first step], or would you prefer to do something else first?"
---
## Phase 5: Hand Off
When the user chooses their next step, let them invoke the skill themselves or offer to run it for them. The `/start` skill's job is done once the user has a clear next action.
Verdict: **COMPLETE** — user oriented and handed off to next step.
---
## Edge Cases
- **User picks D but project is empty**: Gently redirect — "It looks like the project is a fresh template with no artifacts yet. Would Path A or B be a better fit?"
- **User picks A but project has code**: Mention what you found — "I noticed there's already code in `src/`. Did you mean to pick D (existing work)?"
- **User is returning (engine configured, concept exists)**: Skip onboarding entirely — "It looks like you're already set up! Your engine is [X] and you have a game concept at `design/gdd/game-concept.md`. Want to pick up where you left off? Try `/sprint-plan` or just tell me what you'd like to work on."
- **User doesn't fit any option**: Let them describe their situation in their own words and adapt.
---
## Collaborative Protocol
This skill follows the collaborative design principle:
1. **Ask first** — never assume the user's state or intent
2. **Present options** — give clear paths, not mandates
3. **User decides** — they pick the direction
@@ -6,7 +6,10 @@ user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
---
If no argument is provided, output usage guidance and exit without spawning any agents:
> Usage: `/team-audio [feature or area]` — specify the feature or area to design audio for (e.g., `combat`, `main menu`, `forest biome`, `boss encounter`).

Do not use `AskUserQuestion` here; output the guidance directly.
When this skill is invoked with an argument, orchestrate the audio team through a structured pipeline.
**Decision Points:** At each step transition, use `AskUserQuestion` to present
the user with the subagent's proposals as selectable options. Write the agent's
@@ -89,6 +92,24 @@ Spawn the `gameplay-programmer` agent to:
6. **Output a summary** with: audio event count, estimated asset count,
implementation tasks, and any open questions between team members.
Verdict: **COMPLETE** — audio design document produced and team pipeline finished.
If the pipeline stops because a dependency is unresolved (e.g., critical accessibility gap or missing GDD not resolved by the user):
Verdict: **BLOCKED** — [reason]
## File Write Protocol
All file writes (audio design docs, SFX specs, implementation files) are delegated
to sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?"
protocol. This orchestrator does not write files directly.
## Next Steps
- Review the audio design doc with the audio-director before implementation begins.
- Use `/dev-story` to implement the audio manager and event system once the design is approved.
- Run `/asset-audit` after audio assets are created to verify naming and format compliance.
## Error Recovery Protocol
If any spawned agent (via Task) returns BLOCKED, errors, or cannot complete:
@@ -5,7 +5,11 @@ argument-hint: "[combat feature description]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
---
**Argument check:** If no combat feature description is provided, output:
> "Usage: `/team-combat [combat feature description]` — Provide a description of the combat feature to design and implement (e.g., `melee parry system`, `ranged weapon spread`)."
Then stop immediately without spawning any subagents or reading any files.
When this skill is invoked with a valid argument, orchestrate the combat team through a structured pipeline.
**Decision Points:** At each phase transition, use `AskUserQuestion` to present
the user with the subagent's proposals as selectable options. Write the agent's
@@ -96,5 +100,21 @@ Common blockers:
- Scope too large → split into two stories via `/create-stories`
- Conflicting instructions between ADR and story → surface the conflict, do not guess
## File Write Protocol
All file writes (design documents, implementation files, test cases) are
delegated to sub-agents spawned via Task. Each sub-agent enforces the
"May I write to [path]?" protocol. This orchestrator does not write files directly.
## Output
A summary report covering: design completion status, implementation status per team member, test results, and any open issues.
Verdict: **COMPLETE** — combat feature designed, implemented, and validated.
If one or more phases could not complete: Verdict: **BLOCKED** — partial report produced with unresolved items listed.
## Next Steps
- Run `/code-review` on the implemented combat code before closing stories.
- Run `/balance-check` to validate combat formulas and tuning values.
- Run `/team-polish` if VFX, audio, or performance polish is needed.
@@ -49,6 +49,8 @@ Spawn the `world-builder` agent to:
- Define environmental storytelling opportunities
- Specify any world rules that affect gameplay in this area
**Gate**: Use `AskUserQuestion` to present Step 1 outputs and confirm before proceeding to Step 2.
### Step 2: Layout and Encounter Design (level-designer)
Spawn the `level-designer` agent to:
- Design the spatial layout (critical path, optional paths, secrets)
@@ -58,6 +60,17 @@ Spawn the `level-designer` agent to:
- Define points of interest and landmarks for wayfinding
- Specify entry/exit points and connections to adjacent areas
**Adjacent area dependency check**: After the layout is produced, check `design/levels/` for each adjacent area referenced by the level-designer. If any referenced area's `.md` file does not exist, surface the gap:
> "Level references [area-name] as an adjacent area but `design/levels/[area-name].md` does not exist."
Use `AskUserQuestion` with options:
- (a) Proceed with a placeholder reference — mark the connection as UNRESOLVED in the level doc and list it in the open cross-level dependencies section of the summary report
- (b) Pause and run `/team-level [area-name]` first to establish that area
Do NOT invent content for the missing adjacent area.
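The adjacent-area existence check is a plain filesystem probe. A sketch assuming area names map directly to `design/levels/<area>.md`:

```python
from pathlib import Path

def missing_adjacent_areas(levels_dir: str, referenced: list[str]) -> list[str]:
    """Return each referenced adjacent area lacking a <levels_dir>/<area>.md doc."""
    base = Path(levels_dir)
    return [area for area in referenced if not (base / f"{area}.md").is_file()]
```

Any names returned are surfaced via `AskUserQuestion` as described above; none are invented.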
**Gate**: Use `AskUserQuestion` to present Step 2 layout (including any unresolved adjacent area dependencies) and confirm before proceeding to Step 3.
### Step 3: Systems Integration (systems-designer)
Spawn the `systems-designer` agent to:
- Specify enemy compositions and encounter formulas
@@ -66,6 +79,8 @@ Spawn the `systems-designer` agent to:
- Design any area-specific mechanics or environmental hazards
- Specify resource distribution (health pickups, save points, shops)
**Gate**: Use `AskUserQuestion` to present Step 3 outputs and confirm before proceeding to Step 4.
### Step 4: Visual Direction and Accessibility (parallel)
Spawn the `art-director` agent to:
- Define the visual theme and color palette for the area
@@ -81,6 +96,14 @@ Spawn the `accessibility-specialist` agent in parallel to:
- Check that key gameplay areas have sufficient contrast for colorblind players
- Output: accessibility concerns list with severity (BLOCKING / RECOMMENDED / NICE TO HAVE)
Wait for both agents to return before proceeding.
**Gate**: Use `AskUserQuestion` to present both Step 4 results. If the accessibility-specialist returned any BLOCKING concerns, highlight them prominently and offer:
- (a) Return to level-designer and art-director to redesign the flagged elements before Step 5
- (b) Document as a known accessibility gap and proceed to Step 5 with the concern explicitly logged in the final report
Do NOT proceed to Step 5 without the user acknowledging any BLOCKING accessibility concerns.
### Step 5: QA Planning (qa-tester)
Spawn the `qa-tester` agent to:
- Write test cases for the critical path
@@ -94,7 +117,24 @@ Spawn the `qa-tester` agent to:
5. **Save to** `design/levels/[level-name].md`.
6. **Output a summary** with: area overview, encounter count, estimated asset
list, narrative beats, any cross-team dependencies or open questions, open
cross-level dependencies (adjacent areas referenced but not yet designed, each
marked UNRESOLVED), and accessibility concerns with their resolution status.
## File Write Protocol
All file writes (level design docs, narrative docs, test checklists) are delegated
to sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?"
protocol. This orchestrator does not write files directly.
Verdict: **COMPLETE** — level design document produced and all team outputs compiled.
If one or more agents blocked: Verdict: **BLOCKED** — partial report produced with unresolved items listed.
## Next Steps
- Run `/design-review design/levels/[level-name].md` to validate the completed level design doc.
- Run `/dev-story` to implement level content once the design is approved.
- Run `/qa-plan` to generate a QA test plan for this level.
## Error Recovery Protocol
@@ -5,7 +5,11 @@ argument-hint: "[season name or event description]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
---
**Argument check:** If no season name or event description is provided, output:
> "Usage: `/team-live-ops [season name or event description]` — Provide the name or description of the season or live event to plan."
Then stop immediately without spawning any subagents or reading any files.
When this skill is invoked with a valid argument, orchestrate the live-ops team through a structured planning pipeline.
**Decision Points:** At each phase transition, use `AskUserQuestion` to present
the user with the subagent's proposals as selectable options. Write the agent's
@@ -94,16 +98,48 @@ Present a summary to the user with:
- **Content scope**: what is being created
- **Economy health check**: does the reward track feel fair and non-predatory?
- **Analytics readiness**: are success criteria defined and instrumented?
- **Ethics review**: check the Phase 3 economy design against `design/live-ops/ethics-policy.md`
- If the file does not exist: flag "ETHICS REVIEW SKIPPED: `design/live-ops/ethics-policy.md` not found. Economy design was not reviewed against an ethics policy. Recommend creating one before production begins." Include this flag in the season design output document. Add to next steps: create `design/live-ops/ethics-policy.md`.
- If the file exists and a violation is found: flag "ETHICS FLAG: [element] in Phase 3 economy design violates [policy rule]. Approval is blocked until this is resolved." Do NOT issue a COMPLETE verdict or write output documents. Use `AskUserQuestion` with options: revise economy design / override with documented rationale / cancel. If user chooses to revise: re-spawn economy-designer to produce a corrected design, then return to Phase 7 review.
- **Open questions**: decisions still needed before production begins
Ask the user to approve the season plan before delegating to production teams. Issue the COMPLETE verdict only after the user approves and no unresolved ethics violations remain. If an ethics violation is unresolved, end with Verdict: **BLOCKED**.
## Output Documents
All documents save to `design/live-ops/`:
- `seasons/S[N]_[name].md` — Season design document (from Phase 1-3)
- `seasons/S[N]_[name]_analytics.md` — Analytics plan (from Phase 4)
- `seasons/S[N]_[name]_comms.md` — Communication calendar (from Phase 6)
## Error Recovery Protocol
If any spawned agent (via Task) returns BLOCKED, errors, or cannot complete:
1. **Surface immediately**: Report "[AgentName]: BLOCKED — [reason]" to the user before continuing to dependent phases
2. **Assess dependencies**: Check whether the blocked agent's output is required by subsequent phases. If yes, do not proceed past that dependency point without user input.
3. **Offer options** via AskUserQuestion with choices:
- Skip this agent and note the gap in the final report
- Retry with narrower scope
- Stop here and resolve the blocker first
4. **Always produce a partial report** — output whatever was completed. Never discard work because one agent blocked.
If a BLOCKED state is unresolvable, end with Verdict: **BLOCKED** instead of COMPLETE.
## File Write Protocol
All file writes (season design docs, analytics plans, communication calendars) are
delegated to sub-agents spawned via Task. Each sub-agent enforces the
"May I write to [path]?" protocol. This orchestrator does not write files directly.
## Output
A summary covering: season theme and scope, economy design highlights, success metrics, content list, communication plan, and any open decisions needing user input before production.
Verdict: **COMPLETE** — season plan produced and handed off for production.
## Next Steps
- Run `/design-review` on the season design document for consistency validation.
- Run `/sprint-plan` to schedule content creation work for the season.
- Run `/team-release` when the season content is ready to deploy.
@@ -5,7 +5,10 @@ argument-hint: "[narrative content description]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Task, AskUserQuestion, TodoWrite
---
If no argument is provided, output usage guidance and exit without spawning any agents:
> Usage: `/team-narrative [narrative content description]` — describe the story content, scene, or narrative area to work on (e.g., `boss encounter cutscene`, `faction intro dialogue`, `tutorial narrative`).

Do not use `AskUserQuestion` here; output the guidance directly.
When this skill is invoked with an argument, orchestrate the narrative team through a structured pipeline.
**Decision Points:** At each phase transition, use `AskUserQuestion` to present
the user with the subagent's proposals as selectable options. Write the agent's
@ -82,5 +85,24 @@ Common blockers:
- Scope too large → split into two stories via `/create-stories`
- Conflicting instructions between ADR and story → surface the conflict, do not guess
## File Write Protocol
All file writes (narrative docs, dialogue files, lore entries) are delegated to
sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?"
protocol. This orchestrator does not write files directly.
## Output
A summary report covering: narrative brief status, lore entries created/updated, dialogue lines written, level narrative integration points, consistency review results, and any unresolved contradictions.
Verdict: **COMPLETE** — narrative content delivered.
If the pipeline stops because a dependency is unresolved (e.g., lore contradiction or missing prerequisite not resolved by the user):
Verdict: **BLOCKED** — [reason]
## Next Steps
- Run `/design-review` on the narrative documents for consistency validation.
- Run `/localize extract` to extract new strings for translation after dialogue is finalized.
- Run `/dev-story` to implement dialogue triggers and narrative events in-engine.

View file

@ -5,7 +5,10 @@ argument-hint: "[feature or area to polish]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
---
When this skill is invoked, orchestrate the polish team through a structured pipeline.
If no argument is provided, output usage guidance and exit without spawning any agents:
> Usage: `/team-polish [feature or area]` — specify the feature or area to polish (e.g., `combat`, `main menu`, `inventory system`, `level-1`). Do not use `AskUserQuestion` here; output the guidance directly.
When this skill is invoked with an argument, orchestrate the polish team through a structured pipeline.
**Decision Points:** At each phase transition, use `AskUserQuestion` to present
the user with the subagent's proposals as selectable options. Write the agent's
@ -104,5 +107,18 @@ Common blockers:
- Scope too large → split into two stories via `/create-stories`
- Conflicting instructions between ADR and story → surface the conflict, do not guess
## File Write Protocol
All file writes (performance reports, test results, evidence docs) are delegated to
sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?"
protocol. This orchestrator does not write files directly.
## Output
A summary report covering: performance before/after metrics, visual polish changes, audio polish changes, test results, and release readiness assessment.
## Next Steps
- If READY FOR RELEASE: run `/release-checklist` for the final pre-release validation.
- If NEEDS MORE WORK: schedule remaining issues in `/sprint-plan update` and re-run `/team-polish` after fixes.
- Run `/gate-check` for a formal phase gate verdict before handing off to release.

View file

@ -226,3 +226,6 @@ Common blockers:
## Output
A summary covering: stories in scope, smoke check result, manual QA results, bugs filed (with IDs and severities), and the final APPROVED / APPROVED WITH CONDITIONS / NOT APPROVED verdict.
Verdict: **COMPLETE** — QA cycle finished.
Verdict: **BLOCKED** — smoke check failed or critical blocker prevented cycle completion; partial report produced.

View file

@ -5,6 +5,11 @@ argument-hint: "[version number or 'next']"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
---
**Argument check:** If no version number is provided:
1. Read `production/session-state/active.md` and the most recent file in `production/milestones/` (if they exist) to infer the target version.
2. If a version is found: report "No version argument provided — inferred [version] from milestone data. Proceeding." Then confirm with `AskUserQuestion`: "Releasing [version]. Is this correct?"
3. If no version is discoverable: use `AskUserQuestion` to ask "What version number should be released? (e.g., v1.0.0)" and wait for user input before proceeding. Do NOT default to a hardcoded version string.
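The inference in step 1 can be sketched as follows. This is a minimal sketch, assuming milestone files are plain text containing a `vX.Y.Z` version string somewhere in their body; the directory layout comes from the step above, but the version pattern is an assumption, not a committed format:

```python
import os
import re

def infer_version(milestones_dir="production/milestones"):
    """Return the version string from the most recent milestone file, or None."""
    if not os.path.isdir(milestones_dir):
        return None
    files = [os.path.join(milestones_dir, f) for f in os.listdir(milestones_dir)]
    files = [f for f in files if os.path.isfile(f)]
    if not files:
        return None
    # The most recently modified milestone is assumed to describe the next release
    latest = max(files, key=os.path.getmtime)
    with open(latest, encoding="utf-8") as fh:
        match = re.search(r"v\d+\.\d+\.\d+", fh.read())
    return match.group(0) if match else None
```

If this returns `None`, the skill falls through to step 3 and asks the user directly.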
When this skill is invoked, orchestrate the release team through a structured pipeline.
**Decision Points:** At each phase transition, use `AskUserQuestion` to present
@ -68,11 +73,21 @@ Delegate (can run in parallel with Phase 3 if resources available):
### Phase 5: Go/No-Go
Delegate to **producer**:
- Collect sign-off from: qa-lead, release-manager, devops-engineer, technical-director
- Collect sign-off from: qa-lead, release-manager, devops-engineer, security-engineer and network-programmer (each only if spawned in Phase 3), and technical-director
- Evaluate any open issues — are they blocking or can they ship?
- Make the go/no-go call
- Output: release decision with rationale
**If producer declares NO-GO:**
- Surface the decision immediately: "PRODUCER: NO-GO — [rationale, e.g., S1 bug found in Phase 3]."
- Use `AskUserQuestion` with options:
- Fix the blocker and re-run the affected phase
- Defer the release to a later date
- Override NO-GO with documented rationale (user must provide written justification)
- **Skip Phase 6 entirely** — do not tag, deploy to staging, deploy to production, or spawn community-manager.
- Produce a partial report summarizing Phases 1–5 and noting why Phase 6 was skipped.
- Verdict: **BLOCKED** — release not deployed.
### Phase 6: Deployment (if GO)
Delegate to **release-manager** + **devops-engineer**:
- Tag the release in version control
@ -113,5 +128,21 @@ Common blockers:
- Scope too large → split into two stories via `/create-stories`
- Conflicting instructions between ADR and story → surface the conflict, do not guess
## File Write Protocol
All file writes (release checklists, changelogs, patch notes, deployment scripts) are
delegated to sub-agents and sub-skills. Each enforces the "May I write to [path]?"
protocol. This orchestrator does not write files directly.
## Output
A summary report covering: release version, scope, quality gate results, go/no-go decision, deployment status, and monitoring plan.
Verdict: **COMPLETE** — release executed and deployed.
Verdict: **BLOCKED** — release halted; go/no-go was NO or a hard blocker is unresolved.
## Next Steps
- Monitor post-release dashboards for 48 hours.
- Run `/retrospective` if significant issues occurred during the release.
- Update `production/stage.txt` to `Live` after successful deployment.

View file

@ -47,6 +47,15 @@ Before designing anything, read and synthesize:
- `design/ux/interaction-patterns.md` — existing patterns to reuse (not reinvent)
- `design/accessibility-requirements.md` — committed accessibility tier (e.g., Basic, Enhanced, Full)
**If `design/ux/interaction-patterns.md` does not exist**, surface the gap immediately:
> "interaction-patterns.md does not exist — no existing patterns to reuse."
Then use `AskUserQuestion` with options:
- (a) Run `/ux-design patterns` first to establish the pattern library, then continue
- (b) Proceed without the pattern library — ui-programmer will treat all patterns created as new and add each to a new `design/ux/interaction-patterns.md` at completion
Do NOT invent or assume patterns from the feature name or GDD alone. If the user chooses (b), explicitly instruct ui-programmer in Phase 3 to treat all patterns as new and document them in `design/ux/interaction-patterns.md` when implementation is complete. Note the pattern library status (created / absent / updated) in the final summary report.
Summarize the context in a brief for the ux-designer: what the player is doing, what they need, what constraints apply, and which existing patterns are relevant.
### Phase 1b: UX Spec Authoring
@ -141,6 +150,21 @@ Common blockers:
- Scope too large → split into two stories via `/create-stories`
- Conflicting instructions between ADR and story → surface the conflict, do not guess
## File Write Protocol
All file writes (UX specs, interaction pattern library updates, implementation files) are
delegated to sub-agents and sub-skills (`/ux-design`, `ui-programmer`). Each enforces the
"May I write to [path]?" protocol. This orchestrator does not write files directly.
## Output
A summary report covering: UX spec status, UX review verdict, visual design status, implementation status, accessibility compliance, input method support, interaction pattern library update status, and any outstanding issues.
Verdict: **COMPLETE** — UI feature delivered through full pipeline (UX spec → visual → implementation → review → polish).
Verdict: **BLOCKED** — pipeline halted; surface the blocker and its phase before stopping.
## Next Steps
- Run `/ux-review` on the final spec if not yet approved.
- Run `/code-review` on the UI implementation before closing stories.
- Run `/team-polish` if visual or audio polish pass is needed.

View file

@ -5,51 +5,102 @@ argument-hint: "[scan|add|prioritize|report]"
user-invocable: true
allowed-tools: Read, Glob, Grep, Write
---
When this skill is invoked:
1. **Parse the subcommand** from the argument:
- `scan` — Scan the codebase for tech debt indicators
- `add` — Add a new tech debt entry manually
- `prioritize` — Re-prioritize the existing debt register
- `report` — Generate a summary report of current debt status
## Phase 1: Parse Subcommand
2. **For `scan`**:
- Search the codebase for debt indicators:
- `TODO` comments (count and categorize)
- `FIXME` comments (these are bugs disguised as debt)
- `HACK` comments (workarounds that need proper solutions)
- `@deprecated` markers
- Duplicated code blocks (similar patterns in multiple files)
- Files over 500 lines (potential god objects)
- Functions over 50 lines (potential complexity)
- Categorize each finding:
- **Architecture Debt**: Wrong abstractions, missing patterns, coupling issues
- **Code Quality Debt**: Duplication, complexity, naming, missing types
- **Test Debt**: Missing tests, flaky tests, untested edge cases
- **Documentation Debt**: Missing docs, outdated docs, undocumented APIs
- **Dependency Debt**: Outdated packages, deprecated APIs, version conflicts
- **Performance Debt**: Known slow paths, unoptimized queries, memory issues
- Update the debt register at `docs/tech-debt-register.md`
Determine the mode from the argument:
3. **For `add`**:
- Prompt for: description, category, affected files, estimated fix effort, impact if left unfixed
- Append to the debt register
- `scan` — Scan the codebase for tech debt indicators
- `add` — Add a new tech debt entry manually
- `prioritize` — Re-prioritize the existing debt register
- `report` — Generate a summary report of current debt status
4. **For `prioritize`**:
- Read the debt register
- Score each item by: `(impact_if_unfixed * frequency_of_encounter) / fix_effort`
- Re-sort the register by priority score
- Recommend which items to include in the next sprint
If no subcommand is provided, output usage and stop. Verdict: **FAIL** — missing required subcommand.
5. **For `report`**:
- Read the debt register
- Generate summary statistics:
- Total items by category
- Total estimated fix effort
- Items added vs resolved since last report
- Trending direction (growing / stable / shrinking)
- Flag any items that have been in the register for more than 3 sprints
- Output the report
---
## Phase 2A: Scan Mode
Search the codebase for debt indicators:
- `TODO` comments (count and categorize)
- `FIXME` comments (these are bugs disguised as debt)
- `HACK` comments (workarounds that need proper solutions)
- `@deprecated` markers
- Duplicated code blocks (similar patterns in multiple files)
- Files over 500 lines (potential god objects)
- Functions over 50 lines (potential complexity)
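The marker-counting portion of this scan can be sketched as a single pass over the tree. This is a minimal sketch; the set of source extensions is an assumption about the project's layout, and duplicated-code detection is out of scope here:

```python
import re
from collections import Counter
from pathlib import Path

MARKERS = re.compile(r"\b(TODO|FIXME|HACK)\b|@deprecated")

def scan_markers(root="."):
    """Count debt-indicator comments and oversized files under root."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".gd", ".cs", ".md"}:  # assumed extensions
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (OSError, UnicodeDecodeError):
            continue  # skip unreadable or binary-ish files
        for match in MARKERS.finditer(text):
            counts[match.group(0)] += 1
        if len(text.splitlines()) > 500:
            counts["file > 500 lines"] += 1
    return counts
```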
Categorize each finding:
- **Architecture Debt**: Wrong abstractions, missing patterns, coupling issues
- **Code Quality Debt**: Duplication, complexity, naming, missing types
- **Test Debt**: Missing tests, flaky tests, untested edge cases
- **Documentation Debt**: Missing docs, outdated docs, undocumented APIs
- **Dependency Debt**: Outdated packages, deprecated APIs, version conflicts
- **Performance Debt**: Known slow paths, unoptimized queries, memory issues
Present the findings to the user.
Ask: "May I write these findings to `docs/tech-debt-register.md`?"
If yes, update the register (append new entries, do not overwrite existing ones). Verdict: **COMPLETE** — scan findings written to register.
If no, stop here. Verdict: **BLOCKED** — user declined write.
---
## Phase 2B: Add Mode
Prompt for: description, category, affected files, estimated fix effort, impact if left unfixed.
Present the new entry to the user.
Ask: "May I append this entry to `docs/tech-debt-register.md`?"
If yes, append the entry. Verdict: **COMPLETE** — entry added to register.
If no, stop here. Verdict: **BLOCKED** — user declined write.
---
## Phase 2C: Prioritize Mode
Read the debt register at `docs/tech-debt-register.md`.
Score each item by: `(impact_if_unfixed × frequency_of_encounter) / fix_effort`
Re-sort the register by priority score and recommend which items to include in the next sprint.
Present the re-prioritized register to the user.
Ask: "May I write the re-prioritized register back to `docs/tech-debt-register.md`?"
If yes, write the updated file. Verdict: **COMPLETE** — register re-prioritized and saved.
If no, stop here. Verdict: **BLOCKED** — user declined write.
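The scoring and re-sort above can be sketched as follows, assuming each register entry carries the three numeric fields named in the formula (the exact field names are an assumption about the register's format):

```python
def prioritize(items):
    """Sort debt entries by (impact * frequency) / effort, highest score first."""
    def score(item):
        return (item["impact_if_unfixed"] * item["frequency_of_encounter"]) / item["fix_effort"]
    return sorted(items, key=score, reverse=True)
```

The top of the returned list is what gets recommended for the next sprint.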
---
## Phase 2D: Report Mode
Read the debt register. Generate summary statistics:
- Total items by category
- Total estimated fix effort
- Items added vs resolved since last report
- Trending direction (growing / stable / shrinking)
Flag any items that have been in the register for more than 3 sprints.
Output the report to the user. This mode is read-only — no files are written. Verdict: **COMPLETE** — debt report generated.
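The summary statistics above can be computed with a sketch like this; the `sprints_open` field and the previous-total comparison are assumptions about how the register tracks history:

```python
from collections import Counter

def debt_report(items, previous_total=None):
    """Summarize the register: counts per category, total effort, trend, stale items."""
    by_category = Counter(item["category"] for item in items)
    total_effort = sum(item["fix_effort"] for item in items)
    trend = "unknown"
    if previous_total is not None:
        delta = len(items) - previous_total
        trend = "growing" if delta > 0 else "shrinking" if delta < 0 else "stable"
    # Items open longer than 3 sprints get flagged for attention
    stale = [item for item in items if item.get("sprints_open", 0) > 3]
    return {"by_category": dict(by_category), "total_effort": total_effort,
            "trend": trend, "stale": stale}
```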
---
## Phase 3: Next Steps
- Run `/sprint-plan` to schedule high-priority debt items into the next sprint.
- Run `/tech-debt report` at the start of each sprint to track debt trends over time.
### Debt Register Format

View file

@ -236,6 +236,8 @@ After the report:
- For missing sign-offs: "Manual sign-off is required from [role]. Share
`[evidence-path]` with them to complete sign-off."
Verdict: **COMPLETE** — evidence review finished. Use CONCERNS if BLOCKING items were found.
---
## Collaborative Protocol

View file

@ -206,6 +206,6 @@ After writing:
- **Fix is always the goal** — quarantine is temporary; surface the fix
direction even when recommending quarantine
- **Ask before writing** — both the regression-suite update and the report
file require explicit approval
file require explicit approval. On write: Verdict: **COMPLETE** — flakiness report written. On decline: Verdict: **BLOCKED** — user declined write.
- **Flakiness in CI is a team problem** — surface the list and recommended
actions clearly; do not just silently quarantine without the team knowing

View file

@ -368,7 +368,7 @@ Ask: "May I write these helper files to `tests/helpers/`?"
"Skipping `[path]` — already exists. Remove the file manually if you want it
regenerated."
After writing:
After writing: Verdict: **COMPLETE** — helper files created.
"Helper files created. To use them in a test:
- Godot: `class_name` is auto-imported — no explicit import needed
@ -387,3 +387,9 @@ After writing:
- **Helpers should reflect the GDD** — bounds and constants in helpers should
trace to GDD Formulas sections, not invented values
- **Ask before writing** — always confirm before creating files in `tests/`
## Next Steps
- Run `/test-setup` if the test framework has not been scaffolded yet.
- Use `/dev-story` to implement stories — helpers reduce boilerplate in new test files.
- Run `/skill-test` to validate other skills that may need helper coverage.

View file

@ -406,6 +406,8 @@ Gate note: /gate-check Technical Setup → Pre-Production now requires:
- .github/workflows/tests.yml
- At least one example test file
Run /test-setup and write one example test before advancing.
Verdict: **COMPLETE** — test framework scaffolded and CI/CD wired up.
```
---

View file

@ -777,3 +777,5 @@ a requirement. Never silently expand the layout without flagging it.
**Never** write a section without user approval.
**Never** contradict an existing approved UX spec without flagging the conflict.
**Always** show where decisions come from (GDD requirements, player journey, user choices).
Verdict: **COMPLETE** — UX spec written and approved section by section.

4 .gitignore vendored
View file

@ -19,6 +19,10 @@ production/session-state/*.md
expansions/
# === Runtime Artifacts (auto-generated, start empty on fresh clone) ===
# Created on first use by /consistency-check and /architecture-review
docs/consistency-failures.md
# === Build Output ===
build/
builds/

Some files were not shown because too many files have changed in this diff.