mirror of
https://github.com/Donchitos/Claude-Code-Game-Studios
synced 2026-04-21 13:27:18 +00:00
Add director gates system: shared review checkpoints across all workflow skills
Creates .claude/docs/director-gates.md as a central registry of 18 named gate prompts (CD-*, TD-*, PR-*, LP-*, QL-*, ND-*, AD-*) covering all 7 production stages. Skills now reference gate IDs instead of embedding inline director prompts, eliminating drift when prompts need updating.

Updated 15 skills to use gate IDs: brainstorm, map-systems, design-system, architecture-decision, create-architecture, create-epics, create-stories, sprint-plan, milestone-review, playtest-report, prototype, story-done, gate-check, setup-engine, start.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
parent
167fb6c5f2
commit
b139bcf087
16 changed files with 997 additions and 106 deletions
672  .claude/docs/director-gates.md  Normal file

@@ -0,0 +1,672 @@
# Director Gates — Shared Review Pattern

This document defines the standard gate prompts for all director and lead reviews
across every workflow stage. Skills reference gate IDs from this document instead
of embedding full prompts inline — eliminating drift when prompts need updating.

**Scope**: All 7 production stages (Concept → Release), all 3 Tier 1 directors,
all key Tier 2 leads. Any skill, team orchestrator, or workflow may invoke these gates.

---

## How to Use This Document

In any skill, replace an inline director prompt with a reference:

```
Spawn `creative-director` via Task using gate **CD-PILLARS** from
`.claude/docs/director-gates.md`.
```

Pass the context listed under that gate's **Context to pass** field, then handle
the verdict using the **Standard Verdict Format** rules below.

---

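A skill that resolves a gate ID at runtime needs to pull the right section out of this registry. A minimal lookup sketch in Python, assuming the heading convention this file uses (`### GATE-ID — Title`); `find_gate` is a hypothetical helper, not part of any skill contract:

```python
import re

def find_gate(registry_text: str, gate_id: str) -> str:
    """Return the registry section for one gate ID, or raise if absent.

    Assumes each gate starts with a '### GATE-ID — Title' heading and
    runs until the next '##' or '###' heading, as in director-gates.md.
    """
    # Match the gate's heading, then lazily capture everything up to the
    # next heading (or end of file).
    pattern = rf"^### {re.escape(gate_id)} — .*?(?=^#{{2,3}} |\Z)"
    match = re.search(pattern, registry_text, re.MULTILINE | re.DOTALL)
    if match is None:
        raise KeyError(f"gate {gate_id!r} not found in registry")
    return match.group(0).strip()
```

A skill would call `find_gate(registry_text, "CD-PILLARS")` and splice the returned Trigger, Context, and Prompt fields into its Task invocation.
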
## Invocation Pattern (copy into any skill)

```
Spawn `[agent-name]` via Task:
- Gate: [GATE-ID] (see .claude/docs/director-gates.md)
- Context: [fields listed under that gate]
- Await the verdict before proceeding.
```

For parallel spawning (multiple directors at the same gate point):

```
Spawn all [N] agents simultaneously via Task — issue all Task calls before
waiting for any result. Collect all verdicts before proceeding.
```

---

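The parallel-spawning instruction above (issue all Task calls before waiting for any result) is an ordinary fan-out/fan-in. A sketch with asyncio, where `spawn_agent` is a hypothetical stand-in for the runtime's Task primitive and simply echoes a verdict so the shape is runnable:

```python
import asyncio

async def spawn_agent(agent: str, gate_id: str) -> str:
    """Hypothetical stand-in for the Task tool; echoes a canned verdict."""
    await asyncio.sleep(0)  # placeholder for the agent's actual review work
    return f"{agent}:{gate_id}:READY"

async def run_gates(assignments: list[tuple[str, str]]) -> list[str]:
    # Fan-out: create every task before awaiting any single result...
    tasks = [asyncio.create_task(spawn_agent(a, g)) for a, g in assignments]
    # ...fan-in: collect all verdicts before the workflow proceeds.
    return await asyncio.gather(*tasks)

verdicts = asyncio.run(run_gates([
    ("creative-director", "CD-PHASE-GATE"),
    ("technical-director", "TD-PHASE-GATE"),
    ("producer", "PR-PHASE-GATE"),
]))
```

`asyncio.gather` preserves input order, so each verdict can be matched back to the director that produced it.
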
## Standard Verdict Format

All gates return one of three verdicts. Skills must handle all three:

| Verdict | Meaning | Default action |
|---------|---------|----------------|
| **APPROVE / READY** | No issues. Proceed. | Continue the workflow |
| **CONCERNS [list]** | Issues present but not blocking. | Surface to user via `AskUserQuestion` — options: `Revise flagged items` / `Accept and proceed` / `Discuss further` |
| **REJECT / NOT READY [blockers]** | Blocking issues. Do not proceed. | Surface blockers to user. Do not write files or advance stage until resolved. |

**Escalation rule**: When multiple directors are spawned in parallel, apply the
strictest verdict — one NOT READY overrides all READY verdicts.

---

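The escalation rule is a maximum over verdict severity. A sketch that encodes the three-level scheme; the label table merges the verdict vocabularies used across this registry, and the numeric ranks are illustrative:

```python
# Severity 0 = proceed, 1 = surface concerns, 2 = blocked.
SEVERITY = {
    "APPROVE": 0, "READY": 0, "VIABLE": 0, "REALISTIC": 0,
    "ADEQUATE": 0, "FEASIBLE": 0, "ON TRACK": 0,
    "CONCERNS": 1, "OPTIMISTIC": 1, "GAPS": 1, "AT RISK": 1,
    "REJECT": 2, "NOT READY": 2, "HIGH RISK": 2, "UNREALISTIC": 2,
    "INADEQUATE": 2, "INFEASIBLE": 2, "OFF TRACK": 2,
}

def strictest(verdicts: list[str]) -> str:
    """Apply the escalation rule: one blocking verdict overrides all others."""
    return max(verdicts, key=lambda v: SEVERITY[v])
```

For example, `strictest(["READY", "CONCERNS", "READY"])` resolves the parallel round to `"CONCERNS"`.
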
## Recording Gate Outcomes

After a gate resolves, record the verdict in the relevant document's status header:

```markdown
> **[Director] Review ([GATE-ID])**: APPROVED [date] / CONCERNS (accepted) [date] / REVISED [date]
```

For phase gates, record in `docs/architecture/architecture.md` or
`production/session-state/active.md` as appropriate.

---

## Tier 1 — Creative Director Gates

Agent: `creative-director` | Model tier: Opus | Domain: Vision, pillars, player experience

---

### CD-PILLARS — Pillar Stress Test

**Trigger**: After game pillars and anti-pillars are defined (brainstorm Phase 4,
or any time pillars are revised)

**Context to pass**:
- Full pillar set with names, definitions, and design tests
- Anti-pillars list
- Core fantasy statement
- Unique hook ("Like X, AND ALSO Y")

**Prompt**:
> "Review these game pillars. Are they falsifiable — could a real design decision
> actually fail this pillar? Do they create meaningful tension with each other? Do
> they differentiate this game from its closest comparables? Would they help resolve
> a design disagreement in practice, or are they too vague to be useful? Return
> specific feedback for each pillar and an overall verdict: APPROVE (strong), CONCERNS
> [list] (needs sharpening), or REJECT (weak — pillars do not carry weight)."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### CD-GDD-ALIGN — GDD Pillar Alignment Check

**Trigger**: After a system GDD is authored (design-system, quick-design, or any
workflow that produces a GDD)

**Context to pass**:
- GDD file path
- Game pillars (from `design/gdd/game-concept.md` or `design/gdd/game-pillars.md`)
- MDA aesthetics target for this game
- System's stated Player Fantasy section

**Prompt**:
> "Review this system GDD for pillar alignment. Does every section serve the stated
> pillars? Are there mechanics or rules that contradict or weaken a pillar? Does
> the Player Fantasy section match the game's core fantasy? Return APPROVE, CONCERNS
> [specific sections with issues], or REJECT [pillar violations that must be
> redesigned before this system is implementable]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### CD-SYSTEMS — Systems Decomposition Vision Check

**Trigger**: After the systems index is written by `/map-systems` — validates the
complete system set before GDD authoring begins

**Context to pass**:
- Systems index path (`design/gdd/systems-index.md`)
- Game pillars and core fantasy (from `design/gdd/game-concept.md`)
- Priority tier assignments (MVP / Vertical Slice / Alpha / Full Vision)
- Any high-risk or bottleneck systems identified in the dependency map

**Prompt**:
> "Review this systems decomposition against the game's design pillars. Does the
> full set of MVP-tier systems collectively deliver the core fantasy? Are there
> systems whose mechanics don't serve any stated pillar — indicating they may be
> scope creep? Are there pillar-critical player experiences that have no system
> assigned to deliver them? Are any systems missing that the core loop requires?
> Return APPROVE (systems serve the vision), CONCERNS [specific gaps or
> misalignments with their pillar implications], or REJECT [fundamental gaps —
> the decomposition misses critical design intent and must be revised before GDD
> authoring begins]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### CD-NARRATIVE — Narrative Consistency Check

**Trigger**: After narrative GDDs, lore documents, dialogue specs, or world-building
documents are authored (team-narrative, design-system for story systems, writer
deliverables)

**Context to pass**:
- Document file path(s)
- Game pillars
- Narrative direction brief or tone guide (if exists at `design/narrative/`)
- Any existing lore that the new document references

**Prompt**:
> "Review this narrative content for consistency with the game's pillars and
> established world rules. Does the tone match the game's established voice? Are
> there contradictions with existing lore or world-building? Does the content serve
> the player experience pillar? Return APPROVE, CONCERNS [specific inconsistencies],
> or REJECT [contradictions that break world coherence]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### CD-PLAYTEST — Player Experience Validation

**Trigger**: After playtest reports are generated (`/playtest-report`), or after
any session that produces player feedback

**Context to pass**:
- Playtest report file path
- Game pillars and core fantasy statement
- The specific hypothesis being tested

**Prompt**:
> "Review this playtest report against the game's design pillars and core fantasy.
> Does the player experience match the intended fantasy? Are there systematic issues
> that represent pillar drift — mechanics that feel fine in isolation but undermine
> the intended experience? Return APPROVE (core fantasy is landing), CONCERNS [gaps
> between intended and actual experience], or REJECT [core fantasy is not present —
> redesign needed before further playtesting]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### CD-PHASE-GATE — Creative Readiness at Phase Transition

**Trigger**: Always at `/gate-check` — spawn in parallel with TD-PHASE-GATE and PR-PHASE-GATE

**Context to pass**:
- Target phase name
- List of all artifacts present (file paths)
- Game pillars and core fantasy

**Prompt**:
> "Review the current project state for [target phase] gate readiness from a
> creative direction perspective. Are the game pillars faithfully represented in
> all design artifacts? Does the current state preserve the core fantasy? Are there
> any design decisions across GDDs or architecture that compromise the intended
> player experience? Return READY, CONCERNS [list], or NOT READY [blockers]."

**Verdicts**: READY / CONCERNS / NOT READY

---

## Tier 1 — Technical Director Gates

Agent: `technical-director` | Model tier: Opus | Domain: Architecture, engine risk, performance

---

### TD-SYSTEM-BOUNDARY — System Boundary Architecture Review

**Trigger**: After `/map-systems` Phase 3 dependency mapping is agreed but before
GDD authoring begins — validates that the system structure is architecturally
sound before teams invest in writing GDDs against it

**Context to pass**:
- Systems index path (or the dependency map summary if the index is not yet written)
- Layer assignments (Foundation / Core / Feature / Presentation / Polish)
- The full dependency graph (what each system depends on)
- Any bottleneck systems flagged (many dependents)
- Any circular dependencies found and their proposed resolutions

**Prompt**:
> "Review this systems decomposition from an architectural perspective before GDD
> authoring begins. Are the system boundaries clean — does each system own a
> distinct concern with minimal overlap? Are there God Object risks (systems doing
> too much)? Does the dependency ordering create implementation-sequencing problems?
> Are there implicit shared-state problems in the proposed boundaries that will
> cause tight coupling when implemented? Are any Foundation-layer systems actually
> dependent on Feature-layer systems (inverted dependency)? Return APPROVE
> (boundaries are architecturally sound — proceed to GDD authoring), CONCERNS
> [specific boundary issues to address in the GDDs themselves], or REJECT
> [fundamental boundary problems — the system structure will cause architectural
> issues and must be restructured before any GDD is written]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### TD-FEASIBILITY — Technical Feasibility Assessment

**Trigger**: After the biggest technical risks are identified during scope/feasibility
(brainstorm Phase 6, quick-design, or any early-stage concept with technical unknowns)

**Context to pass**:
- Concept's core loop description
- Platform target
- Engine choice (or "undecided")
- List of identified technical risks

**Prompt**:
> "Review these technical risks for a [genre] game targeting [platform] using
> [engine or 'undecided engine']. Flag any HIGH risk items that could invalidate
> the concept as described, any risks that are engine-specific and should influence
> the engine choice, and any risks that are commonly underestimated by solo
> developers. Return VIABLE (risks are manageable), CONCERNS [list with mitigation
> suggestions], or HIGH RISK [blockers that require concept or scope revision]."

**Verdicts**: VIABLE / CONCERNS / HIGH RISK

---

### TD-ARCHITECTURE — Architecture Sign-Off

**Trigger**: After the master architecture document is drafted (`/create-architecture`
Phase 7), and after any major architecture revision

**Context to pass**:
- Architecture document path (`docs/architecture/architecture.md`)
- Technical requirements baseline (TR-IDs and count)
- ADR list with statuses
- Engine knowledge gap inventory

**Prompt**:
> "Review this master architecture document for technical soundness. Check: (1) Is
> every technical requirement from the baseline covered by an architectural decision?
> (2) Are all HIGH risk engine domains explicitly addressed or flagged as open
> questions? (3) Are the API boundaries clean, minimal, and implementable? (4) Are
> Foundation layer ADR gaps resolved before implementation begins? Return APPROVE,
> CONCERNS [list], or REJECT [blockers that must be resolved before coding starts]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### TD-ADR — Architecture Decision Review

**Trigger**: After an individual ADR is authored (`/architecture-decision`), before
it is marked Accepted

**Context to pass**:
- ADR file path
- Engine version and knowledge gap risk level for the domain
- Related ADRs (if any)

**Prompt**:
> "Review this Architecture Decision Record. Does it have a clear problem statement
> and rationale? Are the rejected alternatives genuinely considered? Does the
> Consequences section acknowledge the trade-offs honestly? Is the engine version
> stamped? Are post-cutoff API risks flagged? Does it link to the GDD requirements
> it covers? Return APPROVE, CONCERNS [specific gaps], or REJECT [the decision is
> underspecified or makes unsound technical assumptions]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### TD-ENGINE-RISK — Engine Version Risk Review

**Trigger**: When making architecture decisions that touch post-cutoff engine APIs,
or before finalizing any engine-specific implementation approach

**Context to pass**:
- The specific API or feature being used
- Engine version and LLM knowledge cutoff (from `docs/engine-reference/[engine]/VERSION.md`)
- Relevant excerpt from breaking-changes or deprecated-apis docs

**Prompt**:
> "Review this engine API usage against the version reference. Is this API present
> in [engine version]? Has its signature, behaviour, or namespace changed since the
> LLM knowledge cutoff? Are there known deprecations or post-cutoff alternatives?
> Return APPROVE (safe to use as described), CONCERNS [verify before implementing],
> or REJECT [API has changed — provide corrected approach]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### TD-PHASE-GATE — Technical Readiness at Phase Transition

**Trigger**: Always at `/gate-check` — spawn in parallel with CD-PHASE-GATE and PR-PHASE-GATE

**Context to pass**:
- Target phase name
- Architecture document path (if exists)
- Engine reference path
- ADR list

**Prompt**:
> "Review the current project state for [target phase] gate readiness from a
> technical direction perspective. Is the architecture sound for this phase? Are
> all high-risk engine domains addressed? Are performance budgets realistic and
> documented? Are Foundation-layer decisions complete enough to begin implementation?
> Return READY, CONCERNS [list], or NOT READY [blockers]."

**Verdicts**: READY / CONCERNS / NOT READY

---

## Tier 1 — Producer Gates

Agent: `producer` | Model tier: Opus | Domain: Scope, timeline, dependencies, production risk

---

### PR-SCOPE — Scope and Timeline Validation

**Trigger**: After scope tiers are defined (brainstorm Phase 6, quick-design, or
any workflow that produces an MVP definition and timeline estimate)

**Context to pass**:
- Full vision scope description
- MVP definition
- Timeline estimate
- Team size (solo / small team / etc.)
- Scope tiers (what ships if time runs out)

**Prompt**:
> "Review this scope estimate. Is the MVP achievable in the stated timeline for
> the stated team size? Are the scope tiers correctly ordered by risk — does each
> tier deliver a shippable product if work stops there? What is the most likely
> cut point under time pressure, and is it a graceful fallback or a broken product?
> Return REALISTIC (scope matches capacity), OPTIMISTIC [specific adjustments
> recommended], or UNREALISTIC [blockers — timeline or MVP must be revised]."

**Verdicts**: REALISTIC / OPTIMISTIC / UNREALISTIC

---

### PR-SPRINT — Sprint Feasibility Review

**Trigger**: Before finalising a sprint plan (`/sprint-plan`), and after any
mid-sprint scope change

**Context to pass**:
- Proposed sprint story list (titles, estimates, dependencies)
- Team capacity (hours available)
- Current sprint backlog debt (if any)
- Milestone constraints

**Prompt**:
> "Review this sprint plan for feasibility. Is the story load realistic for the
> available capacity? Are stories correctly ordered by dependency? Are there hidden
> dependencies between stories that could block the sprint mid-way? Are any stories
> underestimated given their technical complexity? Return REALISTIC (plan is
> achievable), CONCERNS [specific risks], or UNREALISTIC [sprint must be
> descoped — identify which stories to defer]."

**Verdicts**: REALISTIC / CONCERNS / UNREALISTIC

---

### PR-MILESTONE — Milestone Risk Assessment

**Trigger**: At milestone review (`/milestone-review`), at mid-sprint retrospectives,
or when a scope change is proposed that affects the milestone

**Context to pass**:
- Milestone definition and target date
- Current completion percentage
- Blocked stories count
- Sprint velocity data (if available)

**Prompt**:
> "Review this milestone status. Based on current velocity and blocked story count,
> will this milestone hit its target date? What are the top 3 production risks
> between now and the milestone? Are there scope items that should be cut to protect
> the milestone date vs. items that are non-negotiable? Return ON TRACK, AT RISK
> [specific mitigations], or OFF TRACK [date must slip or scope must be cut — provide
> both options]."

**Verdicts**: ON TRACK / AT RISK / OFF TRACK

---

### PR-EPIC — Epic Structure Feasibility Review

**Trigger**: After epics are defined by `/create-epics`, before stories are
broken out — validates the epic structure is producible before `/create-stories`
is invoked

**Context to pass**:
- Epic definition file paths (all epics just created)
- Epic index path (`production/epics/index.md`)
- Milestone timeline and target dates
- Team capacity (solo / small team / size)
- Layer being epiced (Foundation / Core / Feature / etc.)

**Prompt**:
> "Review this epic structure for production feasibility before story breakdown
> begins. Are the epic boundaries scoped appropriately — could each epic realistically
> complete before a milestone deadline? Are epics correctly ordered by system
> dependency — does any epic require another epic's output before it can start?
> Are any epics underscoped (too small, should merge) or overscoped (too large,
> should split into 2-3 focused epics)? Are the Foundation-layer epics scoped to
> allow Core-layer epics to begin at the start of the next sprint after Foundation
> completes? Return REALISTIC (epic structure is producible), CONCERNS [specific
> structural adjustments before stories are written], or UNREALISTIC [epics must
> be split, merged, or reordered — story breakdown cannot begin until resolved]."

**Verdicts**: REALISTIC / CONCERNS / UNREALISTIC

---

### PR-PHASE-GATE — Production Readiness at Phase Transition

**Trigger**: Always at `/gate-check` — spawn in parallel with CD-PHASE-GATE and TD-PHASE-GATE

**Context to pass**:
- Target phase name
- Sprint and milestone artifacts present
- Team size and capacity
- Current blocked story count

**Prompt**:
> "Review the current project state for [target phase] gate readiness from a
> production perspective. Is the scope realistic for the stated timeline and team
> size? Are dependencies properly ordered so the team can actually execute in
> sequence? Are there milestone or sprint risks that could derail the phase within
> the first two sprints? Return READY, CONCERNS [list], or NOT READY [blockers]."

**Verdicts**: READY / CONCERNS / NOT READY

---

## Tier 2 — Lead Gates

These gates are invoked by orchestration skills and senior skills when a domain
specialist's feasibility sign-off is needed. Tier 2 leads use Sonnet (default).

---

### LP-FEASIBILITY — Lead Programmer Implementation Feasibility

**Trigger**: After the master architecture document is written (`/create-architecture`
Phase 7b), or when a new architectural pattern is proposed

**Context to pass**:
- Architecture document path
- Technical requirements baseline summary
- ADR list with statuses

**Prompt**:
> "Review this architecture for implementation feasibility. Flag: (a) any decisions
> that would be difficult or impossible to implement with the stated engine and
> language, (b) any missing interface definitions that programmers would need to
> invent themselves, (c) any patterns that create avoidable technical debt or
> that contradict standard [engine] idioms. Return FEASIBLE, CONCERNS [list], or
> INFEASIBLE [blockers that make this architecture unimplementable as written]."

**Verdicts**: FEASIBLE / CONCERNS / INFEASIBLE

---

### LP-CODE-REVIEW — Lead Programmer Code Review

**Trigger**: After a dev story is implemented (`/dev-story`, `/story-done`), or
as part of `/code-review`

**Context to pass**:
- Implementation file paths
- Story file path (for acceptance criteria)
- Relevant GDD section
- ADR that governs this system

**Prompt**:
> "Review this implementation against the story acceptance criteria and governing
> ADR. Does the code match the architecture boundary definitions? Are there
> violations of the coding standards or forbidden patterns? Is the public API
> testable and documented? Are there any correctness issues against the GDD rules?
> Return APPROVE, CONCERNS [specific issues], or REJECT [must be revised before merge]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### QL-STORY-READY — QA Lead Story Readiness Check

**Trigger**: Before a story is accepted into a sprint — invoked by `/create-stories`,
`/story-readiness`, and `/sprint-plan` during story selection

**Context to pass**:
- Story file path
- Story type (Logic / Integration / Visual/Feel / UI / Config/Data)
- Acceptance criteria list (verbatim from the story)
- The GDD requirement (TR-ID and text) the story covers

**Prompt**:
> "Review this story's acceptance criteria for testability before it enters the
> sprint. Are all criteria specific enough that a developer would know unambiguously
> when they are done? For Logic-type stories: can every criterion be verified with
> an automated test? For Integration stories: is each criterion observable in a
> controlled test environment? Flag criteria that are too vague to implement
> against, and flag criteria that require a full game build to test (mark these
> DEFERRED, not BLOCKED). Return ADEQUATE (criteria are implementable as written),
> GAPS [specific criteria needing refinement], or INADEQUATE [criteria are too
> vague — story must be revised before sprint inclusion]."

**Verdicts**: ADEQUATE / GAPS / INADEQUATE

---

### QL-TEST-COVERAGE — QA Lead Test Coverage Review

**Trigger**: After implementation stories are complete, before marking an epic
done, or at `/gate-check` Production → Polish

**Context to pass**:
- List of implemented stories with story types (Logic / Integration / Visual / UI / Config)
- Test file paths in `tests/`
- GDD acceptance criteria for the system

**Prompt**:
> "Review the test coverage for these implementation stories. Are all Logic stories
> covered by passing unit tests? Are Integration stories covered by integration
> tests or documented playtests? Are the GDD acceptance criteria each mapped to at
> least one test? Are there untested edge cases from the GDD Edge Cases section?
> Return ADEQUATE (coverage meets standards), GAPS [specific missing tests], or
> INADEQUATE [critical logic is untested — do not advance]."

**Verdicts**: ADEQUATE / GAPS / INADEQUATE

---

### ND-CONSISTENCY — Narrative Director Consistency Check

**Trigger**: After writer deliverables (dialogue, lore, item descriptions) are
authored, or when a design decision has narrative implications

**Context to pass**:
- Document or content file path(s)
- Narrative bible or tone guide path (if exists)
- Relevant world-building rules
- Character or faction profiles affected

**Prompt**:
> "Review this narrative content for internal consistency and adherence to
> established world rules. Are character voices consistent with their established
> profiles? Does the lore contradict any established facts? Is the tone consistent
> with the game's narrative direction? Return APPROVE, CONCERNS [specific
> inconsistencies to fix], or REJECT [contradictions that break the narrative
> foundation]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

### AD-VISUAL — Art Director Visual Consistency Review

**Trigger**: After art direction decisions are made, when new asset types are
introduced, or when a tech art decision affects visual style

**Context to pass**:
- Art bible path (if exists at `design/art-bible.md`)
- The specific asset type, style decision, or visual direction being reviewed
- Reference images or style descriptions
- Platform and performance constraints

**Prompt**:
> "Review this visual direction decision for consistency with the established art
> style and production constraints. Does it match the art bible? Is it achievable
> within the platform's performance budget? Are there asset pipeline implications
> that create technical risk? Return APPROVE, CONCERNS [specific adjustments], or
> REJECT [style violation or production risk that must be resolved first]."

**Verdicts**: APPROVE / CONCERNS / REJECT

---

## Parallel Gate Protocol

When a workflow requires multiple directors at the same checkpoint (most common
at `/gate-check`), spawn all agents simultaneously:

```
Spawn in parallel (issue all Task calls before waiting for any result):
1. creative-director → gate CD-PHASE-GATE
2. technical-director → gate TD-PHASE-GATE
3. producer → gate PR-PHASE-GATE

Collect all three verdicts, then apply escalation rules:
- Any NOT READY / REJECT → overall verdict minimum FAIL
- Any CONCERNS → overall verdict minimum CONCERNS
- All READY / APPROVE → eligible for PASS (still subject to artifact checks)
```

---

## Adding New Gates

When a new gate is needed for a new skill or workflow:

1. Assign a gate ID: `[DIRECTOR-PREFIX]-[DESCRIPTIVE-SLUG]`
   - Prefixes: `CD-` `TD-` `PR-` `LP-` `QL-` `ND-` `AD-`
   - Add new prefixes for new agents: `AudioDirector` → `AU-`, `UX` → `UX-`
2. Add the gate under the appropriate director section with all five fields:
   Trigger, Context to pass, Prompt, Verdicts, and any special handling notes
3. Reference it in skills by ID only — never copy the prompt text into the skill

---

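The ID scheme in step 1 can be enforced mechanically. A sketch that checks the `[DIRECTOR-PREFIX]-[DESCRIPTIVE-SLUG]` shape; the slug pattern (uppercase words joined by hyphens) is inferred from the existing gate IDs rather than stated by this document:

```python
import re

KNOWN_PREFIXES = {"CD", "TD", "PR", "LP", "QL", "ND", "AD"}

def is_valid_gate_id(gate_id: str, prefixes: set[str] = KNOWN_PREFIXES) -> bool:
    """Check [DIRECTOR-PREFIX]-[DESCRIPTIVE-SLUG], e.g. 'TD-ENGINE-RISK'."""
    match = re.fullmatch(r"([A-Z]{2})-([A-Z0-9]+(?:-[A-Z0-9]+)*)", gate_id)
    return match is not None and match.group(1) in prefixes
```

New prefixes such as `AU-` or `UX-` are handled by passing an extended prefix set.
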
## Gate Coverage by Stage

| Stage | Required Gates | Optional Gates |
|-------|---------------|----------------|
| **Concept** | CD-PILLARS | TD-FEASIBILITY, PR-SCOPE |
| **Systems Design** | TD-SYSTEM-BOUNDARY, CD-SYSTEMS, PR-SCOPE, CD-GDD-ALIGN (per GDD) | ND-CONSISTENCY, AD-VISUAL |
| **Technical Setup** | TD-ARCHITECTURE, TD-ADR (per ADR), LP-FEASIBILITY | TD-ENGINE-RISK |
| **Pre-Production** | PR-EPIC, QL-STORY-READY (per story), PR-SPRINT, all three PHASE-GATE (via gate-check) | CD-PLAYTEST |
| **Production** | LP-CODE-REVIEW (per story), QL-STORY-READY, PR-SPRINT (per sprint) | PR-MILESTONE, QL-TEST-COVERAGE |
| **Polish** | QL-TEST-COVERAGE, CD-PLAYTEST, PR-MILESTONE | |
| **Release** | All three PHASE-GATE (via gate-check) | QL-TEST-COVERAGE |

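For an orchestrator, the Required Gates column can be mirrored as plain data. A sketch restating the table above; per-item qualifiers such as "per ADR" and "per story" are dropped, and "all three PHASE-GATE" is expanded to the three gate IDs:

```python
# Required gates per production stage, mirroring the coverage table.
REQUIRED_GATES = {
    "Concept": ["CD-PILLARS"],
    "Systems Design": ["TD-SYSTEM-BOUNDARY", "CD-SYSTEMS", "PR-SCOPE",
                       "CD-GDD-ALIGN"],
    "Technical Setup": ["TD-ARCHITECTURE", "TD-ADR", "LP-FEASIBILITY"],
    "Pre-Production": ["PR-EPIC", "QL-STORY-READY", "PR-SPRINT",
                       "CD-PHASE-GATE", "TD-PHASE-GATE", "PR-PHASE-GATE"],
    "Production": ["LP-CODE-REVIEW", "QL-STORY-READY", "PR-SPRINT"],
    "Polish": ["QL-TEST-COVERAGE", "CD-PLAYTEST", "PR-MILESTONE"],
    "Release": ["CD-PHASE-GATE", "TD-PHASE-GATE", "PR-PHASE-GATE"],
}
```
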
@@ -309,6 +309,11 @@ to implement it.]

- If the specialist identifies a **blocking issue** (wrong API, deprecated approach, engine version incompatibility): revise the Decision and Engine Compatibility sections accordingly, then confirm the changes with the user before proceeding
- If the specialist finds **minor notes** only: incorporate them into the ADR's Risks subsection

4.6. **Technical Director Strategic Review** — After the engine specialist validation, spawn `technical-director` via Task using gate **TD-ADR** (`.claude/docs/director-gates.md`):
   - Pass: the ADR file path (or draft content), engine version, domain, any existing ADRs in the same domain
   - The TD validates architectural coherence (is this decision consistent with the whole system?) — distinct from the engine specialist's API-level check
   - If CONCERNS or REJECT: revise the Decision or Alternatives sections accordingly before proceeding

5. Ask: "May I write this ADR to `docs/architecture/adr-[NNNN]-[slug].md`?"

If yes, write the file, creating the directory if needed.

@@ -51,15 +51,19 @@ conversationally (not as a checklist):

**Taste profile**:
- What 3 games have you spent the most time with? What kept you coming back?
  *(Ask this as plain text — the user must be able to type specific game names freely.
  Do NOT put this in an AskUserQuestion with preset options.)*
- Are there genres you love? Genres you avoid? Why?
- Do you prefer games that challenge you, relax you, tell you stories,
  or let you express yourself?
  or let you express yourself? *(Use `AskUserQuestion` for this — constrained choice.)*

**Practical constraints** (shape the sandbox before brainstorming):
- Solo developer or team? What skills are available?
- Timeline: weeks, months, or years?
- Any platform constraints? (PC only? Mobile? Console?)
- First game or experienced developer?

**Practical constraints** (shape the sandbox before brainstorming).
Bundle these into a single multi-tab `AskUserQuestion` with these exact tab labels:
- Tab "Experience" — "What kind of experience do you most want players to have?" (Challenge & Mastery / Story & Discovery / Expression & Creativity / Relaxation & Flow)
- Tab "Timeline" — "What's your realistic development timeline?" (Weeks / Months / 1-2 years / Multi-year)
- Tab "Dev level" — "Where are you in your dev journey?" (First game / Shipped before / Professional background)

Use exactly these tab names — do not rename or duplicate them.

**Synthesize** the answers into a **Creative Brief** — a 3-5 sentence
summary of the person's emotional goals, taste profile, and constraints.
@@ -97,8 +101,12 @@ For each concept, present:
- **Why It Could Work** (1 sentence on market/audience fit)
- **Biggest Risk** (1 sentence on the hardest unanswered question)

Present all three. Ask the user to pick one, combine elements, or request
new concepts. Never pressure toward a choice — let them sit with it.

Present all three. Then use `AskUserQuestion` to capture the selection:
- **Use a single-list call — NO tabs, just `prompt` and `options`. Do not use a tabbed form here.**
- **Prompt**: "Which concept resonates with you? You can pick one, combine elements, or ask for fresh directions."
- **Options**: one option per concept (e.g., `Concept 1 — SCAR`), plus `Combine elements across concepts` and `Generate fresh directions`

Never pressure toward a choice — let them sit with it.

---
@@ -109,11 +117,14 @@ The core loop is the beating heart of the game — if it isn't fun in
isolation, no amount of content or polish will save the game.

**30-Second Loop** (moment-to-moment):
- What is the player physically doing most often?
- Is this action intrinsically satisfying? (Would they do it with no
  rewards, no progression, no story — just for the feel of it?)
- What makes this action feel good? (Audio feedback, visual juice,
  timing satisfaction, tactical depth?)

Ask these as `AskUserQuestion` calls — derive the options from the chosen concept, don't hardcode them:

1. **Core action feel** — prompt: "What's the primary feel of the core action?" Generate 3-4 options that fit the concept's genre and tone, plus a free-text escape (`I'll describe it`).

2. **Key design dimension** — identify the most important design variable for this specific concept (e.g., world reactivity, pacing, player agency) and ask about it. Generate options that match the concept. Always include a free-text escape.

After capturing answers, analyze: Is this action intrinsically satisfying? What makes it feel good? (Audio feedback, visual juice, timing satisfaction, tactical depth?)

**5-Minute Loop** (short-term goals):
- What structures the moment-to-moment play into cycles?
@@ -156,6 +167,12 @@ Then define **3+ anti-pillars** (what this game is NOT):
  be cool if..." features that don't serve the core vision
- Frame as: "We will NOT do [thing] because it would compromise [pillar]"

**After pillars and anti-pillars are agreed, spawn `creative-director` via Task using gate CD-PILLARS (`.claude/docs/director-gates.md`) before moving to Phase 5.**

Pass: full pillar set with design tests, anti-pillars, core fantasy, unique hook.

Present the feedback to the user. If CONCERNS or REJECT, offer to revise specific pillars before moving on. If APPROVE, note the approval and continue.

---

### Phase 5: Player Type Validation
@@ -177,8 +194,15 @@ who this game is actually for:

Ground the concept in reality:

- **Engine recommendation** (Godot / Unity / Unreal) with reasoning based
  on concept needs, team expertise, and platform targets
- **Target platform**: Use `AskUserQuestion` — "What platforms are you targeting for this game?"
  Options: `PC (Steam / Epic)` / `Mobile (iOS / Android)` / `Console` / `Web / Browser` / `Multiple platforms`
  Record the answer — it directly shapes the engine recommendation and will be passed to `/setup-engine`.
  Note platform implications if relevant (e.g., mobile means Unity is strongly preferred; console means Godot has limitations; web means Godot exports cleanly).
- **Engine experience**: Use `AskUserQuestion` — "Do you already have an engine you work in?"
  Options: `Godot` / `Unity` / `Unreal Engine 5` / `No preference — help me decide`
  - If they pick an engine → record it as their preference and move on. Do NOT second-guess it.
  - If "No preference" → tell them: "Run `/setup-engine` after this session — it will walk you through the full decision based on your concept and platform target." Do not make a recommendation here.
- **Art pipeline**: What's the art style and how labor-intensive is it?
- **Content scope**: Estimate level/area count, item count, gameplay hours
- **MVP definition**: What's the absolute minimum build that tests "is the
@@ -186,6 +210,18 @@ Ground the concept in reality:
- **Biggest risks**: Technical risks, design risks, market risks
- **Scope tiers**: What's the full vision vs. what ships if time runs out?

**After identifying biggest technical risks, spawn `technical-director` via Task using gate TD-FEASIBILITY (`.claude/docs/director-gates.md`) before scope tiers are defined.**

Pass: core loop description, platform target, engine choice (or "undecided"), list of identified technical risks.

Present the assessment to the user. If HIGH RISK, offer to revisit scope before finalising. If CONCERNS, note them and continue.

**After scope tiers are defined, spawn `producer` via Task using gate PR-SCOPE (`.claude/docs/director-gates.md`).**

Pass: full vision scope, MVP definition, timeline estimate, team size.

Present the assessment to the user. If UNREALISTIC, offer to adjust the MVP definition or scope tiers before writing the document.

---

4. **Generate the game concept document** using the template at
@@ -197,16 +233,29 @@ Ground the concept in reality:

If yes, generate the document using the template at `.claude/docs/templates/game-concept.md`, fill in ALL sections from the brainstorm conversation, and write the file, creating directories as needed.

If no:
- If the user already named a section to change, revise it directly — do not ask again which section.
- If the user said no without specifying what to change, use `AskUserQuestion` — "Which section would you like to revise?"
  Options: `Elevator Pitch` / `Core Fantasy & Unique Hook` / `Pillars` / `Core Loop` / `MVP Definition` / `Scope Tiers` / `Risks` / `Something else — I'll describe`

After revising, show the updated section as a diff or clear before/after, then use `AskUserQuestion` — "Ready to write the updated concept document?"
Options: `Yes — write it` / `Revise another section`
Repeat until the user approves the write.

**Scope consistency rule**: The "Estimated Scope" field in the Core Identity table must match the full-vision timeline from the Scope Tiers section — not just say "Large (9+ months)". Write it as "Large (X–Y months, solo)" or "Large (X–Y months, team of N)" so the summary table is accurate.
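For instance, if the Scope Tiers section estimates a 9–12 month full vision for a solo developer, the Core Identity row would read as follows (the numbers are illustrative, not taken from any real concept document):

```
| Estimated Scope | Large (9–12 months, solo) |
```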
6. **Suggest next steps** (in this order — this is the professional studio
   pre-production pipeline):
   - "Run `/setup-engine [engine] [version]` to configure the engine and populate version-aware reference docs"
   - "Use `/design-review design/gdd/game-concept.md` to validate completeness"
   - "Discuss vision with the `creative-director` agent for pillar refinement"
   - "Decompose the concept into individual systems with `/map-systems` — maps dependencies, assigns priorities, and creates the systems index"
   - "Author per-system GDDs with `/design-system` — guided, section-by-section GDD writing"
   - "Prototype the core loop with `/prototype [core-mechanic]`"
   - "Playtest the prototype with `/playtest-report` to validate the hypothesis"
   - "If validated, plan the first sprint with `/sprint-plan new`"

   pre-production pipeline). List ALL steps — do not abbreviate or truncate:
   1. "Run `/setup-engine` to configure the engine and populate version-aware reference docs"
   2. "Use `/design-review design/gdd/game-concept.md` to validate concept completeness before going downstream"
   3. "Discuss vision with the `creative-director` agent for pillar refinement"
   4. "Decompose the concept into individual systems with `/map-systems` — maps dependencies, assigns priorities, and creates the systems index"
   5. "Author per-system GDDs with `/design-system` — guided, section-by-section GDD writing for each system identified in step 4"
   6. "Plan the technical architecture with `/create-architecture` — defines how all systems fit together and connect"
   7. "Validate readiness to advance with `/gate-check` — phase gate before committing to production"
   8. "Prototype the riskiest system with `/prototype [core-mechanic]` — validate the core loop before full implementation"
   9. "Run `/playtest-report` after the prototype to validate the core hypothesis"
   10. "If validated, plan the first sprint with `/sprint-plan new`"

7. **Output a summary** with the chosen concept's elevator pitch, pillars,
   primary player type, engine recommendation, biggest risk, and file path.
@@ -325,6 +325,37 @@ derived from the game concept, GDDs, and technical preferences]

---

## Phase 7b: Technical Director Sign-Off + Lead Programmer Feasibility Review

After writing the master architecture document, perform an explicit sign-off before handoff.

**Step 1 — Technical Director self-review** (this skill runs as technical-director):

Apply gate **TD-ARCHITECTURE** (`.claude/docs/director-gates.md`) as a self-review. Check all four criteria from that gate definition against the completed document.

**Step 2 — Spawn `lead-programmer` via Task using gate LP-FEASIBILITY (`.claude/docs/director-gates.md`):**

Pass: architecture document path, technical requirements baseline summary, ADR list.

**Step 3 — Present both assessments to the user:**

Show the Technical Director assessment and Lead Programmer verdict side by side.

Use `AskUserQuestion` — "Technical Director and Lead Programmer have reviewed the architecture. How would you like to proceed?"
Options: `Accept — proceed to handoff` / `Revise flagged items first` / `Discuss specific concerns`

**Step 4 — Record sign-off in the architecture document:**

Update the Document Status section:
```
- Technical Director Sign-Off: [date] — APPROVED / APPROVED WITH CONDITIONS
- Lead Programmer Feasibility: FEASIBLE / CONCERNS ACCEPTED / REVISED
```

Ask: "May I update the Document Status section in `docs/architecture/architecture.md` with the sign-off?"

---

## Phase 8: Handoff

After writing the document, provide a clear handoff:
@@ -112,6 +112,16 @@ Options: "Yes, create it", "Skip", "Pause — I need to write ADRs first"

---

## 4b. Producer Epic Structure Gate

After all epics for the current layer are defined (Step 4 completed for all in-scope systems), and before writing any files, spawn `producer` via Task using gate **PR-EPIC** (`.claude/docs/director-gates.md`).

Pass: the full epic structure summary (all epics, their scope summaries, governing ADR counts), the layer being processed, milestone timeline and team capacity.

Present the producer's assessment. If UNREALISTIC, offer to revise epic boundaries (split overscoped or merge underscoped epics) before writing. If CONCERNS, surface them and let the user decide. Do not write epic files until the producer gate resolves.

---

## 5. Write Epic Files

After approval, ask: "May I write the epic file to `production/epics/[epic-slug]/EPIC.md`?"
|
@ -87,6 +87,16 @@ For each story, determine:
|
|||
|
||||
---
|
||||
|
||||
## 4b. QA Lead Story Readiness Gate
|
||||
|
||||
After decomposing all stories (Step 4 complete) but before presenting them for write approval, spawn `qa-lead` via Task using gate **QL-STORY-READY** (`.claude/docs/director-gates.md`).
|
||||
|
||||
Pass: the full story list with acceptance criteria, story types, and TR-IDs; the epic's GDD acceptance criteria for reference.
|
||||
|
||||
Present the QA lead's assessment. For each story flagged as GAPS or INADEQUATE, revise the acceptance criteria before proceeding — stories with untestable criteria cannot be implemented correctly. Once all stories reach ADEQUATE, proceed to Step 5.
|
||||
|
||||
---
|
||||
|
||||
## 5. Present Stories for Review
|
||||
|
||||
Before writing any files, present the full story list:
|
||||
|
|
|
|||
|
|
@@ -557,6 +557,17 @@ the source of truth). Verify:
- Dependencies are listed with interfaces
- Acceptance criteria are testable

### 5a-bis: Creative Director Pillar Review

Before finalizing the GDD, spawn `creative-director` via Task using gate **CD-GDD-ALIGN** (`.claude/docs/director-gates.md`).

Pass: completed GDD file path, game pillars (from `design/gdd/game-concept.md` or `design/gdd/game-pillars.md`), MDA aesthetics target.

Handle the verdict per the standard rules in `director-gates.md`. After resolution, record the verdict in the GDD Status header:
`> **Creative Director Review (CD-GDD-ALIGN)**: APPROVED [date] / CONCERNS (accepted) [date] / REVISED [date]`

---

### 5b: Update Entity Registry

Scan the completed GDD for cross-system facts that should be registered:
@@ -257,6 +257,40 @@ For items that can't be automatically verified, **ask the user**:

---

## 4b. Director Panel Assessment

Before generating the final verdict, spawn all three directors as **parallel subagents** via Task using the parallel gate protocol from `.claude/docs/director-gates.md`. Issue all three Task calls simultaneously — do not wait for one before starting the next.

**Spawn in parallel:**

1. **`creative-director`** — gate **CD-PHASE-GATE** (`.claude/docs/director-gates.md`)
2. **`technical-director`** — gate **TD-PHASE-GATE** (`.claude/docs/director-gates.md`)
3. **`producer`** — gate **PR-PHASE-GATE** (`.claude/docs/director-gates.md`)

Pass to each: target phase name, list of artifacts present, and the context fields listed in that gate's definition.

**Collect all three responses, then present the Director Panel summary:**

```
## Director Panel Assessment

Creative Director: [READY / CONCERNS / NOT READY]
[feedback]

Technical Director: [READY / CONCERNS / NOT READY]
[feedback]

Producer: [READY / CONCERNS / NOT READY]
[feedback]
```

**Apply to the verdict:**
- Any director returns NOT READY → verdict is minimum FAIL (user may override with explicit acknowledgement)
- Any director returns CONCERNS → verdict is minimum CONCERNS
- All three READY → eligible for PASS (still subject to artifact and quality checks from Section 3)

---

## 5. Output the Verdict

```
@@ -137,6 +137,12 @@ Show the dependency map as a layered list. Highlight:
Use `AskUserQuestion` to ask: "Does this dependency ordering look right? Any
dependencies I'm missing or that should be removed?"

**After dependency mapping is approved, spawn `technical-director` via Task using gate TD-SYSTEM-BOUNDARY (`.claude/docs/director-gates.md`) before proceeding to priority assignment.**

Pass: the dependency map summary, layer assignments, bottleneck systems list, any circular dependency resolutions.

Present the assessment. If REJECT, revise the system boundaries with the user before moving to priority assignment. If CONCERNS, note them inline in the systems index and continue.

---

## 5. Phase 4: Priority Assignment (Collaborative)
@@ -163,6 +169,12 @@ Which systems should be higher or lower priority?"
Explain reasoning in conversation: "I placed [system] in MVP because the core loop
requires it — without [system], the 30-second loop can't function."

**After priorities are approved, spawn `producer` via Task using gate PR-SCOPE (`.claude/docs/director-gates.md`) before writing the index.**

Pass: total system count per milestone tier, estimated implementation volume per tier (system count × average complexity), team size, stated project timeline.

Present the assessment. If UNREALISTIC, offer to revise priority tier assignments before writing the index. If CONCERNS, note them and continue.

### Step 4c: Determine Design Order

Combine dependency sort + priority tier to produce the final design order:
@@ -200,6 +212,12 @@ Ask: "May I write the systems index to `design/gdd/systems-index.md`?"

Wait for approval. Write the file only after "yes."

**After the systems index is written, spawn `creative-director` via Task using gate CD-SYSTEMS (`.claude/docs/director-gates.md`).**

Pass: systems index path, game pillars and core fantasy (from `design/gdd/game-concept.md`), MVP priority tier system list.

Present the assessment. If REJECT, revise the system set with the user before GDD authoring begins. If CONCERNS, record them in the systems index as a `> **Creative Director Note**` at the top of the relevant tier section.

### Step 5c: Update Session State

After writing, create `production/session-state/active.md` if it does not exist, then update it with:
@@ -95,6 +95,16 @@ Read all sprint reports for sprints within this milestone from `production/sprin

---

## Phase 3b: Producer Risk Assessment

Before generating the Go/No-Go recommendation, spawn `producer` via Task using gate **PR-MILESTONE** (`.claude/docs/director-gates.md`).

Pass: milestone name and target date, current completion percentage, blocked story count, velocity data from sprint reports (if available), list of cut candidates.

Present the producer's assessment inline within the Go/No-Go section. The producer's verdict (ON TRACK / AT RISK / OFF TRACK) informs the overall recommendation — do not issue a GO against an OFF TRACK producer verdict without explicit user acknowledgement.

---

## Phase 4: Save Review

Present the review to the user.
@@ -107,6 +107,16 @@ Present the categorized list, then route:

---

## Phase 3b: Creative Director Player Experience Review

After categorising findings, spawn `creative-director` via Task using gate **CD-PLAYTEST** (`.claude/docs/director-gates.md`).

Pass: the structured report content, game pillars and core fantasy (from `design/gdd/game-concept.md`), the specific hypothesis being tested.

Present the creative director's assessment before saving the report. If CONCERNS or REJECT, add a `## Creative Director Assessment` section to the report capturing the verdict and feedback. If APPROVE, note the approval in the report.

---

## Phase 4: Save Report

Ask: "May I write this playtest report to `production/qa/playtests/playtest-[date]-[tester].md`?"
@@ -110,21 +110,11 @@ If yes, write the file.

## Phase 6: Creative Director Review

Delegate the decision to the creative-director. Spawn a `creative-director` subagent via Task and provide:
Spawn `creative-director` via Task using gate **CD-PLAYTEST** (`.claude/docs/director-gates.md`).

- The full REPORT.md content
- The original design question
- Any game pillars or concept doc from `design/gdd/` that are relevant
Pass: the full REPORT.md content, the original design question, game pillars and core fantasy from `design/gdd/game-concept.md` (if it exists).

Ask the creative-director to:

- Evaluate the prototype result against the game's creative vision and pillars
- Confirm, modify, or override the prototyper's PROCEED / PIVOT / KILL recommendation
- If PROCEED: identify any creative constraints for the production implementation
- If PIVOT: specify which direction aligns better with the pillars
- If KILL: note whether the underlying player need should be addressed differently

The creative-director's decision is final. Update the REPORT.md `Recommendation` section with the creative-director's verdict if it differs from the prototyper's.
The creative director evaluates the prototype result against the game's creative vision and pillars, then confirms, modifies, or overrides the prototyper's PROCEED / PIVOT / KILL recommendation. Their verdict is final. Update the REPORT.md `Recommendation` section if the creative director's verdict differs from the prototyper's.

---
@ -32,32 +32,84 @@ If no engine is specified, run an interactive engine selection process:
|
|||
> you want to build — it will also recommend an engine. Or tell me about your
|
||||
> game and I can help you pick."
|
||||
|
||||
### If the user wants to pick without a concept, ask:
|
||||
### If the user wants to pick without a concept, ask in this order:
|
||||
|
||||
**Question 1 — Prior experience** (ask this first, always, via `AskUserQuestion`):
|
||||
- Prompt: "Have you worked in any of these engines before?"
|
||||
- Options: `Godot` / `Unity` / `Unreal Engine 5` / `Multiple — I'll explain` / `None of them`
|
||||
- If they pick a specific engine → recommend that engine. Prior experience outweighs all other factors. Confirm with them and skip the matrix.
|
||||
- If "None" or "Multiple" → continue to the questions below.
|
||||
|
||||
**Questions 2-6 — Decision matrix inputs** (only if no prior engine experience):
|
||||
|
||||
**Question 2 — Target platform** (ask this second, always, via `AskUserQuestion` — platform eliminates or heavily weights engines before any other factor):
|
||||
- Prompt: "What platforms are you targeting for this game?"
|
||||
- Options: `PC (Steam / Epic)` / `Mobile (iOS / Android)` / `Console` / `Web / Browser` / `Multiple platforms`
|
||||
- Platform rules that feed directly into the recommendation:
|
||||
- Mobile → Unity strongly preferred; Unreal is a poor fit; Godot is viable for simple mobile
|
||||
- Console → Unity or Unreal; Godot console support requires third-party publishers or significant extra work
|
||||
- Web → Godot exports cleanly to web; Unity WebGL is functional; Unreal has poor web support
|
||||
- PC only → all engines viable; other factors decide
|
||||
- Multiple → Unity is the most portable across PC/mobile/console
|
||||
|
||||
1. **What kind of game?** (2D, 3D, or both?)
|
||||
2. **What platforms?** (PC, mobile, console, web?)
|
||||
3. **Primary input method?** (keyboard/mouse, gamepad, touch, or mixed?)
|
||||
4. **Team size and experience?** (solo beginner, solo experienced, small team?)
|
||||
5. **Any strong language preferences?** (GDScript, C#, C++, visual scripting?)
|
||||
6. **Budget for engine licensing?** (free only, or commercial licenses OK?)
|
||||
2. **Primary input method?** (keyboard/mouse, gamepad, touch, or mixed?)
|
||||
3. **Team size and experience?** (solo beginner, solo experienced, small team?)
|
||||
4. **Any strong language preferences?** (GDScript, C#, C++, visual scripting?)
|
||||
5. **Budget for engine licensing?** (free only, or commercial licenses OK?)
|
||||
|
||||
### Produce a recommendation
|
||||
|
||||
Use this decision matrix:
|
||||
Do NOT use a simple scoring matrix that eliminates engines. Instead, reason through the user's profile against the honest tradeoffs below, then present 1-2 recommendations with full context. Always end with the user choosing — never force a verdict.
|
||||
|
||||
| Factor | Godot 4 | Unity | Unreal Engine 5 |
|
||||
|--------|---------|-------|-----------------|
|
||||
| **Best for** | 2D games, small 3D, solo/small teams | Mobile, mid-scope 3D, cross-platform | AAA 3D, photorealism, large teams |
|
||||
| **Language** | GDScript (+ C#, C++ via extensions) | C# | C++ / Blueprint |
|
||||
| **Cost** | Free, MIT license | Free under revenue threshold | Free under revenue threshold, 5% royalty |
|
||||
| **Learning curve** | Gentle | Moderate | Steep |
|
||||
| **2D support** | Excellent (native) | Good (but 3D-first engine) | Possible but not ideal |
|
||||
| **3D quality ceiling** | Good (improving rapidly) | Very good | Best-in-class |
|
||||
| **Web export** | Yes (native) | Yes (limited) | No |
|
||||
| **Console export** | Via third-party | Yes (with license) | Yes |
|
||||
| **Open source** | Yes | No | Source available |
|
||||
**Engine honest tradeoffs:**
|
||||
|
||||
Present the top 1-2 recommendations with reasoning tied to the user's answers.
|
||||
Let the user choose — never force a recommendation.
|
||||
**Godot 4**
|
||||
- Genuine strengths: 2D (best in class), stylized/indie 3D, rapid iteration, free forever (MIT), open source, gentlest learning curve, best for solo devs who want full control
|
||||
- Real limitations: 3D ecosystem is thin compared to Unity/Unreal (fewer tutorials, assets, community answers for 3D-specific problems); large open-world 3D is very hard and largely untested in Godot; console export requires third-party publishers or significant extra work; smaller professional job market
|
||||
- Licensing reality: Truly free with no revenue thresholds ever. MIT license means you own everything.
|
||||
- Best fit: 2D games of any scope; stylized/atmospheric 3D; contained 3D worlds (not open-world); first game projects where learning curve matters; projects where budget is a hard constraint at any scale
|
||||
|
||||
**Unity**
|
||||
- Genuine strengths: Industry standard for mid-scope 3D and mobile; massive asset store and tutorial ecosystem; C# is a professional language; best console certification support for indie; strong community for almost every genre
|
||||
- Real limitations: Licensing controversy in 2023 damaged trust (runtime fee was proposed then walked back — the risk of policy changes remains real); C# has a steeper initial curve than GDScript; heavier editor than Godot for simple projects
|
||||
- Licensing reality: Free under $200K revenue AND 200K installs (Unity Personal/Plus). Only becomes costly if the game is genuinely successful — most indie games never hit this threshold. The 2023 controversy is worth knowing about but the actual current terms are reasonable for most indie developers.
|
||||
- Best fit: Mobile games; mid-scope 3D; games targeting console; developers with C# background; projects needing large asset store; teams of 2-5
|
||||
|
||||
**Unreal Engine 5**
|
||||
- Genuine strengths: Best-in-class 3D visuals (Lumen, Nanite, Chaos physics); industry standard for AAA and photorealistic 3D; large open-world support is mature and production-tested; Blueprint visual scripting lowers the C++ barrier; strong for games targeting high-end PC or console
- Real limitations: Steepest learning curve; heaviest editor (slow compile times, large project sizes); overkill for stylized/2D/small-scope games; C++ is genuinely hard; not suitable for mobile or web; 5% royalty past $1M gross revenue
- Licensing reality: 5% royalty only applies AFTER $1M gross revenue per title. For a first game or any game that doesn't reach $1M, it costs nothing. This threshold is high enough that most indie developers will never pay it.
- Best fit: AAA-quality 3D; large open-world games; photorealistic visuals; developers with C++ experience or willing to use Blueprint; games targeting high-end PC/console where visual fidelity is a core selling point

**Genre-specific guidance** (factor this into the recommendation):

- 2D any style → Godot strongly preferred
- 3D stylized / atmospheric / contained world → Godot viable, Unity solid alternative
- 3D open world (large, seamless) → Unity or Unreal; Godot is not production-proven for this
- 3D photorealistic / AAA-quality → Unreal
- Mobile-first → Unity strongly preferred
- Console-first → Unity or Unreal; Godot console support requires extra work
- Horror / narrative / walking sim → any engine; match to art style and team experience
- Action RPG / Soulslike → Unity or Unreal for 3D; community support and assets matter here
- Platformer 2D → Godot
- Strategy / top-down / RTS → Godot or Unity depending on 2D vs 3D

**Recommendation format:**

1. Show a comparison table with the user's specific factors as rows
2. Give a primary recommendation with honest reasoning
3. Name the best alternative and when to choose it instead
4. Explicitly state: "This is a starting point, not a verdict — you can always migrate engines, and many developers switch between projects."
5. Use `AskUserQuestion` to confirm: "Does this recommendation feel right, or would you like to explore a different engine?"
   - Options: `[Primary engine] (Recommended)` / `[Alternative engine]` / `[Third engine]` / `Explore further` / `Type something`
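
In the same fence style other skills in this repo use for `AskUserQuestion`, the confirmation might be sketched like this (the bracketed engine names are placeholders filled in from the actual recommendation):

```
question: "Does this recommendation feel right, or would you like to explore a different engine?"
options:
  - "[Primary engine] (Recommended)"
  - "[Alternative engine]"
  - "[Third engine]"
  - "Explore further"
  - "Type something"
```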

**If the user picks "Explore further":**

Use `AskUserQuestion` with concept-specific deep-dive topics. Always generate these options from the user's actual concept — do not use generic options. Always include at minimum:
- The primary engine's specific limitations for this concept (e.g., "How far can Godot 3D actually go for [genre]?")
- The alternative engine's specific tradeoffs for this concept
- Language choice impact on this concept's technical challenges
- Any concept-specific technical concern (e.g., adaptive audio, open-world streaming, multiplayer netcode)

The user can select multiple topics. Answer each selected topic in depth before returning to the engine confirmation question.
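
As an illustration only, with an invented concept (a seamless open-world space game leaning toward Unity), a concept-specific deep-dive question might look like this; the `multiSelect` flag is an assumption to allow picking several topics:

```
question: "Which tradeoffs should we dig into before you commit to Unity?"
multiSelect: true
options:
  - "How far can Godot 3D actually go for a seamless open world?"
  - "Unreal's compile times and project size for a solo developer"
  - "C# vs GDScript vs C++ for streaming large star systems"
  - "Open-world streaming and level-of-detail strategy in Unity"
```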

---
@ -150,18 +150,18 @@ stories that haven't changed, add new stories, remove dropped ones.
---

## Phase 4: Scope and Risk Check
## Phase 4: Producer Feasibility Gate

After presenting the sprint plan, add:
Before finalising the sprint plan, spawn `producer` via Task using gate **PR-SPRINT** (`.claude/docs/director-gates.md`).

Pass: proposed story list (titles, estimates, dependencies), total team capacity in hours/days, any carryover from the previous sprint, milestone constraints and deadline.
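
Following the invocation pattern at the top of this document, the spawn and its context payload might look like this sketch (story names, estimates, and capacity figures are illustrative, not prescribed):

```
Spawn `producer` via Task using gate **PR-SPRINT** from
`.claude/docs/director-gates.md`.

Context:
- Stories: "Inventory UI (3d)", "Save system (5d, depends on data layer)"
- Team capacity: 12 developer-days this sprint
- Carryover: "Dialogue editor polish (2d remaining)"
- Milestone: vertical slice due at sprint end
```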

Present the producer's assessment. If UNREALISTIC, revise the story selection (defer stories to Should Have or Nice to Have) before asking for write approval. If CONCERNS, surface them and let the user decide whether to adjust.

After handling the producer's verdict, add:

> **Scope check:** If this sprint includes stories added beyond the original epic scope, run `/scope-check [epic]` to detect scope creep before implementation begins.

When reviewing stories during selection, note any stories that appear outside the original epic goals. If any are uncertain, flag them inline: "Are these stories within the original epic scope? If unsure, `/scope-check` can verify."

For comprehensive sprint planning, consider consulting:
- `producer` agent for capacity planning, risk assessment, and cross-department coordination
- `game-designer` agent for feature prioritization and design readiness assessment

---

## Phase 5: Next Steps
@ -32,22 +32,16 @@ Store these findings internally to validate the user's self-assessment and tailo
## Phase 2: Ask Where the User Is

This is the first thing the user sees. Present these 4 options clearly:
This is the first thing the user sees. Use `AskUserQuestion` with these exact options so the user can click rather than type:

> **Welcome to Claude Code Game Studios!**
>
> Before I suggest anything, I'd like to understand where you're starting from.
> Where are you at with your game idea right now?
>
> **A) No idea yet** — I don't have a game concept at all. I want to explore and figure out what to make.
>
> **B) Vague idea** — I have a rough theme, feeling, or genre in mind (e.g., "something with space" or "a cozy farming game") but nothing concrete.
>
> **C) Clear concept** — I know the core idea — genre, basic mechanics, maybe a pitch sentence — but haven't formalized it into documents yet.
>
> **D) Existing work** — I already have design docs, prototypes, code, or significant planning done. I want to organize or continue the work.
- **Prompt**: "Welcome to Claude Code Game Studios! Before I suggest anything, I'd like to understand where you're starting from. Where are you at with your game idea right now?"
- **Options**:
  - `A) No idea yet` — I don't have a game concept at all. I want to explore and figure out what to make.
  - `B) Vague idea` — I have a rough theme, feeling, or genre in mind (e.g., "something with space" or "a cozy farming game") but nothing concrete.
  - `C) Clear concept` — I know the core idea — genre, basic mechanics, maybe a pitch sentence — but haven't formalized it into documents yet.
  - `D) Existing work` — I already have design docs, prototypes, code, or significant planning done. I want to organize or continue the work.
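
As a concrete sketch of that `AskUserQuestion` call, using the fence style other gates in this repo use (descriptions trimmed to labels for brevity):

```
question: "Where are you at with your game idea right now?"
options:
  - "A) No idea yet"
  - "B) Vague idea"
  - "C) Clear concept"
  - "D) Existing work"
```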

Wait for the user's answer. Do not proceed until they respond.
Wait for the user's selection. Do not proceed until they respond.

---
@ -58,11 +52,11 @@ Wait for the user's answer. Do not proceed until they respond.
The user needs creative exploration before anything else.

1. Acknowledge that starting from zero is completely fine
2. Briefly explain what `/brainstorm` does (guided ideation using professional frameworks — MDA, player psychology, verb-first design)
3. Recommend running `/brainstorm open` as the next step
2. Briefly explain what `/brainstorm` does (guided ideation using professional frameworks — MDA, player psychology, verb-first design). Mention that it has two modes: `/brainstorm open` for fully open exploration, or `/brainstorm [hint]` if they have even a vague theme (e.g., "space", "cozy", "horror").
3. Recommend running `/brainstorm open` as the next step, but invite them to use a hint if something comes to mind
4. Show the recommended path:
   **Concept phase:**
   - `/brainstorm` — discover your game concept
   - `/brainstorm open` — discover your game concept
   - `/setup-engine` — configure the engine (brainstorm will recommend one)
   - `/map-systems` — decompose the concept into systems
   - `/design-system` — author a GDD for each MVP system
@ -101,13 +95,12 @@ The user needs creative exploration before anything else.
#### If C: Clear concept

1. Ask 2-3 follow-up questions:
   - What's the genre and core mechanic? (one sentence)
   - Do they have an engine preference, or need help choosing?
   - What's the rough scope? (jam game, small project, large project)
2. Offer two paths:
   - **Formalize first**: Run `/brainstorm` to structure the concept into a proper game concept document
   - **Jump to engine setup**: Go straight to `/setup-engine` and write the GDD manually afterward
1. Ask them to describe their concept in one sentence — genre and core mechanic. Use plain text, not AskUserQuestion (it's an open response).
2. Acknowledge the concept, then use `AskUserQuestion` to offer two paths:
   - **Prompt**: "How would you like to proceed?"
   - **Options**:
     - `Formalize it first` — Run `/brainstorm [concept]` to structure it into a proper game concept document
     - `Jump straight in` — Go to `/setup-engine` now and write the GDD manually afterward
3. Show the recommended path:
   **Concept phase:**
   - `/brainstorm` or `/setup-engine` (their pick)
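
Sketched in the same fence style, under the assumption that the two option labels map directly to `AskUserQuestion` options:

```
question: "How would you like to proceed?"
options:
  - "Formalize it first"
  - "Jump straight in"
```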
@ -154,15 +147,18 @@ The user needs creative exploration before anything else.
## Phase 4: Confirm Before Proceeding

After presenting the recommended path, ask the user which step they'd like to take first. Never auto-run the next skill.
After presenting the recommended path, use `AskUserQuestion` to ask the user which step they'd like to take first. Never auto-run the next skill.

> "Would you like to start with [recommended first step], or would you prefer to do something else first?"
- **Prompt**: "Would you like to start with [recommended first step]?"
- **Options**:
  - `Yes, let's start with [recommended first step]`
  - `I'd like to do something else first`

---

## Phase 5: Hand Off

When the user chooses their next step, let them invoke the skill themselves or offer to run it for them. The `/start` skill's job is done once the user has a clear next action.
When the user confirms their next step, respond with a single short line: "Type `[skill command]` to begin." Nothing else. Do not re-explain the skill or add encouragement. The `/start` skill's job is done.

Verdict: **COMPLETE** — user oriented and handed off to next step.
@ -214,24 +214,17 @@ For each deviation found, categorize:
---

## Phase 5: Code Review Prompt
## Phase 5: Lead Programmer Code Review Gate

After criteria verification and deviation check, use `AskUserQuestion`:
Spawn `lead-programmer` via Task using gate **LP-CODE-REVIEW** (`.claude/docs/director-gates.md`).

```
question: "Implementation verified. Run /code-review on the changed files?"
options:
- "Yes — run /code-review now"
- "Skip — I'll review manually"
- "Skip — already reviewed"
```

Pass: implementation file paths, story file path, relevant GDD section, governing ADR.

If "Yes": list the files to review and say:
"Run `/code-review [file1] [file2]` to review the implementation before marking complete."
Present the verdict to the user. If CONCERNS, surface them via `AskUserQuestion`:
- Options: `Revise flagged issues` / `Accept and proceed` / `Discuss further`
If REJECT, do not proceed to Phase 6 verdict until the issues are resolved.
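
The gate fixes the option labels but not the question wording; one possible shape, in the same fence style (the question text here is a suggestion, not part of the gate):

```
question: "The lead programmer raised concerns. How do you want to handle them?"
options:
- "Revise flagged issues"
- "Accept and proceed"
- "Discuss further"
```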

Do not run code-review inline — surface it and let the developer decide when to invoke it.
If the story has no implementation files yet (verdict is being run before coding is done), skip this phase and note: "LP-CODE-REVIEW skipped — no implementation files found. Run after implementation is complete."

---