mirror of
https://github.com/coleam00/Archon
synced 2026-04-21 13:37:41 +00:00
SQL, documentation, and Docker touch ups
This commit is contained in:
parent
0854f1c3f0
commit
9c58a2ecdf
10 changed files with 1171 additions and 56 deletions
309
.agents/commands/execute-github.md
Normal file
@@ -0,0 +1,309 @@
---
description: Execute an implementation plan in GitHub workflow
argument-hint: [path-to-plan] [feature-branch]
---

# Execute: Implement from Plan (GitHub Workflow)

## Arguments

- **Plan Path** (`$1`): Path to the implementation plan file (e.g., `.agents/plans/add-user-auth.md`)
- **Feature Branch** (`$2`): Name of the feature branch to work on (e.g., `feature-add-user-auth`)

## Plan to Execute

Read plan file: `$1`

## Feature Branch

Checkout and work on branch: `$2`

## Execution Instructions

### 0. Setup: Checkout Feature Branch

Before starting implementation, ensure you're on the correct feature branch:

```bash
# Fetch latest changes from remote
git fetch origin

# Checkout the feature branch
git checkout $2

# Pull latest changes from the feature branch
git pull origin $2
```

**Verify you're on the correct branch:**

```bash
git branch --show-current
# Should output: $2
```
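The branch check above can be made fail-fast with a small guard. This is a sketch only; `assert_branch` is a hypothetical helper name, and the real current-branch value would come from `git branch --show-current`:

```shell
# Fail loudly when the current branch is not the expected feature branch.
# Wire in real values with: assert_branch "$(git branch --show-current)" "$2"
assert_branch() {
  current="$1"
  expected="$2"
  if [ "$current" = "$expected" ]; then
    echo "on expected branch: $expected"
  else
    echo "on '$current', expected '$expected'" >&2
    return 1
  fi
}

assert_branch "feature-add-user-auth" "feature-add-user-auth"
```

Calling this at the top of the run lets the agent abort before touching files on the wrong branch.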
### 1. Read and Understand

- Read the ENTIRE plan carefully from `$1`
- Understand all tasks and their dependencies
- Note the validation commands to run
- Review the testing strategy
- Understand the acceptance criteria

### 2. Execute Tasks in Order

For EACH task in "Step by Step Tasks":

#### a. Navigate to the task

- Identify the file and action required
- Read existing related files if modifying

#### b. Implement the task

- Follow the detailed specifications exactly
- Maintain consistency with existing code patterns
- Include proper type hints and documentation
- Add structured logging where appropriate

#### c. Verify as you go

- After each file change, check syntax
- Ensure imports are correct
- Verify types are properly defined

#### d. Commit incrementally

- Make small, focused commits as you complete tasks
- Use descriptive commit messages
- Example: `feat: implement streaming response handler`
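The commit-message style in the example above can be checked mechanically before committing. A minimal sketch, assuming the conventional `type: subject` prefixes; the helper name is illustrative, not part of the repo:

```shell
# Accept messages shaped like "feat: <subject>" with a known conventional type.
is_conventional() {
  case "$1" in
    "feat: "?*|"fix: "?*|"docs: "?*|"refactor: "?*|"test: "?*|"chore: "?*) return 0 ;;
    *) return 1 ;;
  esac
}

is_conventional "feat: implement streaming response handler" && echo "valid"
```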
### 3. Implement Testing Strategy

After completing the implementation tasks:

**Recommended Approach:** Write failing tests first for complex logic (especially path handling and type conversions). This gives faster feedback than implementing first and testing afterward.

- Create all test files specified in the plan
- Implement all test cases mentioned
- Follow the testing approach outlined
- Ensure tests cover edge cases

### 4. Run Validation Commands

Execute ALL validation commands from the plan, in order:

```bash
# Run each command exactly as specified in the plan
```

If any command fails:

- Fix the issue
- Re-run the command
- Continue only when it passes
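The fail-fast rule above can be sketched as a small runner; `run_validations` is a hypothetical helper shown with placeholder commands rather than the plan's real ones:

```shell
# Run validation commands in order; stop at the first failure so it can be
# fixed and re-run before continuing with the rest.
run_validations() {
  for cmd in "$@"; do
    echo "running: $cmd"
    if ! sh -c "$cmd"; then
      echo "FAILED: $cmd" >&2
      return 1
    fi
  done
  echo "all validations passed"
}

run_validations "true" "echo lint ok"
```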
### 5. Final Verification

Before creating the pull request:

- ✅ All tasks from plan completed
- ✅ All tests created and passing
- ✅ All validation commands pass
- ✅ Code follows project conventions
- ✅ Documentation added/updated as needed
- ✅ All changes committed to feature branch
### 6. Create Pull Request to Staging

Once all validation passes, create a pull request to the **staging** branch:

```bash
# Push all commits to the feature branch
git push origin $2

# Create PR to staging branch (NOT main)
gh pr create \
  --base staging \
  --head $2 \
  --title "Feature: <descriptive-title>" \
  --body "$(cat <<EOF
## Summary
<Brief description of what this PR implements>

## Implementation Plan
Implemented from plan: \`$1\`

## Changes
- <List major changes>
- <Include files created/modified>

## Testing
- ✅ All unit tests passing
- ✅ All integration tests passing
- ✅ All validation commands passed

## Validation Results
\`\`\`bash
# Output from validation commands
<Include key validation results>
\`\`\`

## Acceptance Criteria
<List acceptance criteria from plan with checkboxes>
- [ ] Criterion 1
- [ ] Criterion 2

## Ready for Review
All implementation tasks completed and validated. Ready for staging deployment and testing.
EOF
)"
```

**Important Notes:**

- PRs target the **staging** branch, NOT main
- The staging branch is used for testing before the production merge
- Use a descriptive PR title that clearly indicates the feature
- Include a comprehensive PR description with testing results
### 7. Capture PR Information

After creating the PR, capture the PR URL:

```bash
# Get the PR URL for the feature branch
gh pr view $2 --json url --jq .url
```
## Output Report

Provide a comprehensive summary that will be automatically posted as a GitHub comment (you don't have to do this yourself):

```markdown
## ✅ Implementation Complete

**Feature Branch:** `$2`
**Implementation Plan:** `$1`
**Pull Request:** <PR-URL>

### Summary
<Brief 2-3 sentence summary of what was implemented>

### Completed Tasks
<Summarize major tasks completed>

#### Files Created
- `path/to/new_file1.py` - <Purpose>
- `path/to/new_file2.py` - <Purpose>
- `tests/path/to/test_file.py` - <Test coverage>

#### Files Modified
- `path/to/modified_file1.py` - <Changes made>
- `path/to/modified_file2.py` - <Changes made>

### Tests Added
**Test Files Created:**
- `tests/path/to/test_suite.py` - <Number> test cases

**Test Coverage:**
- Unit tests: ✅ All passing
- Integration tests: ✅ All passing
- Edge cases: ✅ Covered

### Validation Results
```bash
# Linting
<Output from linting commands>

# Type Checking
<Output from type checking>

# Test Suite
<Output from test runs with pass/fail counts>
```

**All Validation:** ✅ Passed

### Acceptance Criteria
<List each criterion from plan with ✅ or ❌>
- ✅ Criterion 1 - Met
- ✅ Criterion 2 - Met
- ✅ All validation commands passed
- ✅ Tests provide adequate coverage
- ✅ Code follows project conventions

### Pull Request Details
- **Target Branch:** `staging`
- **Status:** Open and ready for review
- **Link:** <PR-URL>

### Deployment Notes
<Any important notes for staging deployment or testing>

### Next Steps
1. Review the pull request: <PR-URL>
2. Test in staging environment
3. If staging tests pass, merge to staging
4. After staging validation, create PR from staging to main for production deployment

---

**Implementation Status:** ✅ Complete
**Branch:** `$2`
**PR:** <PR-URL>
```
## Error Handling

If you encounter issues during execution:

### Plan Deviations

- Document any deviations from the plan
- Explain why the deviation was necessary
- Update the implementation approach accordingly

### Validation Failures

- Never skip validation steps
- Fix all failures before creating the PR
- Document any persistent issues in the PR description

### Unexpected Complexity

- If tasks are more complex than planned, break them down further
- Add additional commits with clear messages
- Document complexity issues in the final report

### Missing Information

- If the plan lacks necessary details, research and document them
- Add findings to implementation notes
- Consider creating a research report for future reference
## Notes

- Always work on the specified feature branch (`$2`)
- All PRs target the **staging** branch, not main
- Commit frequently with descriptive messages
- Run validation commands before creating the PR
- Include comprehensive testing in the PR description
- Document any deviations from the plan
- Feature branches follow the naming convention `feature-<descriptive-name>`
## Quality Checklist

Before marking as complete:

- [ ] All tasks from plan implemented
- [ ] All tests passing
- [ ] All validation commands successful
- [ ] Code follows project patterns and conventions
- [ ] Proper error handling implemented
- [ ] Documentation updated (if applicable)
- [ ] PR created with comprehensive description
- [ ] PR targets staging branch
- [ ] All commits have clear messages
- [ ] No debugging code or console.logs left behind
- [ ] Performance considerations addressed
- [ ] Security best practices followed
## Success Criteria

**Implementation Success**: All tasks completed, all tests passing, all validation commands successful

**PR Quality**: Comprehensive description, clear testing results, ready for review

**GitHub Integration**: PR created to staging, branch naming follows convention, proper commit history

**Documentation**: Final report includes all required sections with accurate information
596
.agents/commands/plan-feature-github.md
Normal file
@@ -0,0 +1,596 @@
---
description: "Create comprehensive feature plan with GitHub workflow integration"
---

# Plan a new task (GitHub Workflow)

## Feature: $ARGUMENTS

## Mission

Transform a feature request into a **comprehensive implementation plan** through systematic codebase analysis, external research, and strategic planning. This plan will be committed to a feature branch and used for the GitHub-based implementation workflow.

**Core Principle**: We do NOT write code in this phase. Our goal is to create a context-rich implementation plan that enables one-pass implementation success for AI agents working in GitHub workflows.

**Key Philosophy**: Context is King. The plan must contain ALL information needed for implementation - patterns, mandatory reading, documentation, validation commands - so the execution agent succeeds on the first attempt.

**HARD CONSTRAINT**: The final plan MUST be between 500 and 700 lines total. Be concise while comprehensive. Reference patterns instead of repeating them. Group related tasks. Remove redundancy.

## GitHub Workflow Integration

This command creates a feature branch and commits the plan to it, preparing for the GitHub-native implementation workflow.

**Branch Naming**: Feature branches follow the pattern `feature-<descriptive-name>` (e.g., `feature-add-user-auth`, `feature-streaming-api`)

**Plan Location**: Plans are committed to `.agents/plans/{kebab-case-name}.md` within the feature branch

**GitHub Context**: You have access to the GitHub CLI (`gh`) and can use commands like:

- `gh issue view <number>` - View issue details
- `gh pr view <number>` - View pull request details
- `gh repo view` - View repository information
## Planning Process

### Phase 1: Feature Understanding

**Deep Feature Analysis:**

- Extract the core problem being solved
- Identify user value and business impact
- Determine feature type: New Capability/Enhancement/Refactor/Bug Fix
- Assess complexity: Low/Medium/High
- Map affected systems and components

**Create a User Story (or Refine One if Provided by the User):**

```
As a <type of user>
I want to <action/goal>
So that <benefit/value>
```
### Phase 2: Codebase Intelligence Gathering

**Use specialized agents and parallel analysis:**

**1. Project Structure Analysis**

- Detect primary language(s), frameworks, and runtime versions
- Map directory structure and architectural patterns
- Identify service/component boundaries and integration points
- Locate configuration files (pyproject.toml, package.json, etc.)
- Find environment setup and build processes

**2. Pattern Recognition** (Use specialized subagents when beneficial)

- Search for similar implementations in the codebase
- Identify coding conventions:
  - Naming patterns (CamelCase, snake_case, kebab-case)
  - File organization and module structure
  - Error handling approaches
  - Logging patterns and standards
- Extract common patterns for the feature's domain
- Document anti-patterns to avoid
- Check CLAUDE.md for project-specific rules and conventions

**3. Dependency Analysis**

- Catalog external libraries relevant to the feature
- Understand how libraries are integrated (check imports, configs)
- Find relevant documentation in docs/, ai_docs/, .agents/reference, or ai-wiki if available
- Note library versions and compatibility requirements

**4. Testing Patterns**

- Identify the test framework and structure (pytest, jest, etc.)
- Find similar test examples for reference
- Understand test organization (unit vs integration)
- Note coverage requirements and testing standards

**5. Integration Points**

- Identify existing files that need updates
- Determine new files that need creation and their locations
- Map router/API registration patterns
- Understand database/model patterns if applicable
- Identify authentication/authorization patterns if relevant

**Clarify Ambiguities:**

- If requirements are unclear at this point, ask the user to clarify before you continue
- Get specific implementation preferences (libraries, approaches, patterns)
- Resolve architectural decisions before proceeding
### Phase 3: External Research & Documentation

**Use specialized subagents when beneficial for external research:**

**Research Report Validation (CRITICAL FIRST STEP):**

Before conducting new research, validate existing research reports:

- Check `.agents/report/` for relevant research documents
- **Read each report thoroughly** - don't just skim
- **Validate completeness** - does it answer ALL implementation questions?
  - Are ALL mentioned components/patterns actually explained with code examples?
  - Does it cover edge cases and error handling?
  - Are there references to concepts without full implementation details?
- **Identify gaps** - what's mentioned but not fully explained?
- **Fill gaps immediately** - research missing details before proceeding
- Document which reports were validated and any gaps found

**Example Gap Analysis:**

```markdown
Report: research-report-streaming.md
✓ Covers: Basic streaming pattern
✗ Gap Found: Mentions CallToolsNode but no handling code
✗ Gap Found: Says "first chunk includes role" but no empty chunk requirement
→ Action: Research OpenAI SSE spec for first chunk requirements
→ Action: Research Pydantic AI CallToolsNode attributes and usage
```

**Documentation Gathering:**

- Research latest library versions and best practices
- Find official documentation with specific section anchors
- Locate implementation examples and tutorials
- Identify common gotchas and known issues
- Check for breaking changes and migration guides

**Technology Trends:**

- Research current best practices for the technology stack
- Find relevant blog posts, guides, or case studies
- Identify performance optimization patterns
- Document security considerations

**Compile Research References:**

```markdown
## Relevant Documentation

- [Library Official Docs](https://example.com/docs#section)
  - Specific feature implementation guide
  - Why: Needed for X functionality
- [Framework Guide](https://example.com/guide#integration)
  - Integration patterns section
  - Why: Shows how to connect components
```
**External Package API Verification (CRITICAL for new dependencies):**

When the feature requires adding a new external Python package:

1. **Verify Package Name vs Import Name**
   - The PyPI package name often differs from the Python import name
   - Example: `brave-search-python-client` (package) → `brave_search_python_client` (import)
   - NEVER assume they're identical - always verify

2. **Test Actual API Before Planning**

   ```bash
   # Install package
   uv add <package-name>

   # Test import and inspect API
   uv run python -c "from package_name import ClassName; help(ClassName)" | head -50
   ```

3. **Document Verified API in Plan**
   - Correct import statements with actual class/function names
   - Actual method signatures (sync vs async, parameters)
   - Required request/response objects
   - Include code examples from package documentation

4. **Common API Verification Mistakes to Avoid**
   - ❌ Assuming the class name from the package name (e.g., `BraveSearchClient` vs actual `BraveSearch`)
   - ❌ Guessing method names (e.g., `.search()` vs actual `.web()`)
   - ❌ Missing required request objects (e.g., `WebSearchRequest`)
   - ❌ Wrong sync/async usage (e.g., sync when package is async-only)

**Example Research Entry with API Verification:**

```markdown
### brave-search-python-client API

**Verified Import & API:**
```python
# ✓ Verified via: uv run python -c "from brave_search_python_client import BraveSearch; help(BraveSearch.web)"
from brave_search_python_client import BraveSearch, WebSearchRequest

# Class: BraveSearch (NOT BraveSearchClient)
# Method: async web(request: WebSearchRequest) - NOT search()
# Requires: WebSearchRequest object (NOT direct parameters)
```

**Documentation:**
- [Official API Docs](https://brave-search-python-client.readthedocs.io/)
- [GitHub Examples](https://github.com/helmut-hoffer-von-ankershoffen/brave-search-python-client/tree/main/examples)

**Why This Matters:** Prevents ModuleNotFoundError and AttributeError during implementation.
```
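The package-name vs import-name caveat above can also be probed mechanically. A sketch assuming `python3` is on PATH; `probe_import` is an illustrative helper name, and hyphens-to-underscores is only the common case, not a guarantee:

```shell
# Guess the import name from the PyPI name (hyphens -> underscores) and
# check whether it actually imports in the current environment.
probe_import() {
  module=$(printf '%s' "$1" | tr '-' '_')
  if python3 -c "import $module" 2>/dev/null; then
    echo "import ok: $module"
  else
    echo "import failed: $module"
  fi
}

probe_import "brave-search-python-client"
```

A failed probe is the cue to read the package's docs for the real import name before writing the plan.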
### Phase 4: Deep Strategic Thinking

**Think Harder About:**

- How does this feature fit into the existing architecture?
- What are the critical dependencies and order of operations?
- What could go wrong? (Edge cases, race conditions, errors)
- How will this be tested comprehensively?
- What performance implications exist?
- Are there security considerations?
- How maintainable is this approach?

**Design Decisions:**

- Choose between alternative approaches with clear rationale
- Design for extensibility and future modifications
- Plan for backward compatibility if needed
- Consider scalability implications
### Phase 5: Create Feature Branch & Commit Plan

**1. Create Feature Branch:**

```bash
# Generate descriptive branch name (e.g., feature-add-streaming-api)
git checkout -b feature-<descriptive-name>
```

**Branch name should be:**

- Lowercase with hyphens
- Descriptive and concise (3-5 words max)
- Clearly indicate the feature (e.g., `feature-user-auth`, `feature-rag-pipeline`, `feature-streaming-response`)
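A conforming name can be derived from a free-form feature title with a small slug helper; `slugify_branch` is illustrative only, not a project script:

```shell
# Lowercase the title, collapse runs of non-alphanumerics into hyphens,
# trim stray hyphens, keep at most five words, and prefix with "feature-".
slugify_branch() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9][^a-z0-9]*/-/g' -e 's/^-//' -e 's/-$//' \
    | cut -d- -f1-5 \
    | sed 's/^/feature-/'
}

slugify_branch "Add User Auth"   # -> feature-add-user-auth
```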
**2. Generate Plan Structure:**

Create a comprehensive plan using the following structure template:
```markdown
# Feature: <feature-name>

The following plan should be complete, but it's important that you validate documentation, codebase patterns, and task sanity before you start implementing.

Pay special attention to the naming of existing utils, types, and models. Import from the right files, etc.

## Feature Description

<Detailed description of the feature, its purpose, and value to users>

## User Story

As a <type of user>
I want to <action/goal>
So that <benefit/value>

## Problem Statement

<Clearly define the specific problem or opportunity this feature addresses>

## Solution Statement

<Describe the proposed solution approach and how it solves the problem>

## Feature Metadata

**Feature Type**: [New Capability/Enhancement/Refactor/Bug Fix]
**Estimated Complexity**: [Low/Medium/High]
**Primary Systems Affected**: [List of main components/services]
**Dependencies**: [External libraries or services required]

---

## CONTEXT REFERENCES

### Relevant Codebase Files IMPORTANT: YOU MUST READ THESE FILES BEFORE IMPLEMENTING!

<List files with line numbers and relevance>

- `path/to/file.py` (lines 15-45) - Why: Contains pattern for X that we'll mirror
- `path/to/model.py` (lines 100-120) - Why: Database model structure to follow
- `path/to/test.py` - Why: Test pattern example

### New Files to Create

- `path/to/new_service.py` - Service implementation for X functionality
- `path/to/new_model.py` - Data model for Y resource
- `tests/path/to/test_new_service.py` - Unit tests for new service

### Relevant Documentation YOU SHOULD READ THESE BEFORE IMPLEMENTING!

- [Documentation Link 1](https://example.com/doc1#section)
  - Specific section: Authentication setup
  - Why: Required for implementing secure endpoints
- [Documentation Link 2](https://example.com/doc2#integration)
  - Specific section: Database integration
  - Why: Shows proper async database patterns

### Patterns to Follow

<Specific patterns extracted from codebase - include actual code examples from the project>

**Naming Conventions:** (for example)

**Error Handling:** (for example)

**Logging Pattern:** (for example)

**Other Relevant Patterns:** (for example)

---
## IMPLEMENTATION PLAN

### Phase 1: Foundation

<Describe foundational work needed before main implementation>

**Tasks:**

- Set up base structures (schemas, types, interfaces)
- Configure necessary dependencies
- Create foundational utilities or helpers

### Phase 2: Core Implementation

<Describe the main implementation work>

**Tasks:**

- Implement core business logic
- Create service layer components
- Add API endpoints or interfaces
- Implement data models

### Phase 3: Integration

<Describe how feature integrates with existing functionality>

**Tasks:**

- Connect to existing routers/handlers
- Register new components ⚠️ **CRITICAL: Preserve import order for side-effect imports** (use `# ruff: noqa: I001`)
- Update configuration files
- Add middleware or interceptors if needed

### Phase 4: Testing & Validation

<Describe testing approach>

**Tasks:**

- Implement unit tests for each component
- Create integration tests for feature workflow
  - **Pattern:** Test service layer functions directly (NOT tool registration with RunContext)
  - **Example:** `await service.execute_function(vault_manager, params...)`
- Add edge case tests
- Validate against acceptance criteria

---
## STEP-BY-STEP TASKS

IMPORTANT: Execute every task in order, top to bottom. Each task is atomic and independently testable.

### Task Format Guidelines

Use information-dense keywords for clarity:

- **CREATE**: New files or components
- **UPDATE**: Modify existing files
- **ADD**: Insert new functionality into existing code
- **REMOVE**: Delete deprecated code
- **REFACTOR**: Restructure without changing behavior
- **MIRROR**: Copy pattern from elsewhere in codebase

### {ACTION} {target_file}

- **IMPLEMENT**: {Specific implementation detail}
- **PATTERN**: {Reference to existing pattern - file:line}
- **IMPORTS**: {Required imports and dependencies}
- **GOTCHA**: {Known issues or constraints to avoid}
- **VALIDATE**: `{executable validation command}`

<Continue with all tasks in dependency order...>

---
## TESTING STRATEGY

<Define testing approach based on the project's test framework and patterns discovered during research>

### Unit Tests

<Scope and requirements based on project standards>

Design unit tests with fixtures and assertions following existing testing approaches

### Integration Tests

<Scope and requirements based on project standards>

### Edge Cases

<List specific edge cases that must be tested for this feature>

---
## VALIDATION COMMANDS

<Define validation commands based on project's tools discovered in Phase 2>

Execute every command to ensure zero regressions and 100% feature correctness.

### Level 1: Import Validation (CRITICAL)

**Verify all imports resolve before running tests:**

```bash
uv run python -c "from app.main import app; print('✓ All imports valid')"
```

**Expected:** "✓ All imports valid" (no ModuleNotFoundError or ImportError)

**Why:** Catches incorrect package imports immediately. If this fails, fix imports before proceeding.

### Level 2: Syntax & Style

<Project-specific linting and formatting commands>

### Level 3: Unit Tests

<Project-specific unit test commands>

### Level 4: Integration Tests

<Project-specific integration test commands>

### Level 5: Manual Validation

<Feature-specific manual testing steps - API calls, UI testing, etc.>

### Level 6: Additional Validation (Optional)

<MCP servers or additional CLI tools if available>

---
## ACCEPTANCE CRITERIA

<List specific, measurable criteria that must be met for completion>

- [ ] Feature implements all specified functionality
- [ ] All validation commands pass with zero errors
- [ ] Unit test coverage meets requirements (80%+)
- [ ] Integration tests verify end-to-end workflows
- [ ] Code follows project conventions and patterns
- [ ] No regressions in existing functionality
- [ ] Documentation is updated (if applicable)
- [ ] Performance meets requirements (if applicable)
- [ ] Security considerations addressed (if applicable)

---

## COMPLETION CHECKLIST

- [ ] All tasks completed in order
- [ ] Each task validation passed immediately
- [ ] All validation commands executed successfully
- [ ] Full test suite passes (unit + integration)
- [ ] No linting or type checking errors
- [ ] Manual testing confirms feature works
- [ ] Acceptance criteria all met
- [ ] Code reviewed for quality and maintainability

---

## NOTES

<Additional context, design decisions, trade-offs>
```
**3. Commit Plan to Feature Branch:**

```bash
# Create .agents/plans directory if it doesn't exist
mkdir -p .agents/plans

# Write plan to file
# Filename: .agents/plans/{kebab-case-descriptive-name}.md

# Commit the plan
git add .agents/plans/{plan-name}.md
git commit -m "Add implementation plan for {feature-name}"

# Push feature branch to GitHub
git push -u origin feature-<descriptive-name>
```
## Output Format

### GitHub Comment Summary

Provide a final summary that will be automatically posted as a GitHub comment (you don't need to do that yourself). This should include:

```markdown
## 📋 Implementation Plan Created

**Feature Branch:** `feature-<branch-name>`
**Plan Location:** `.agents/plans/<plan-name>.md`

### Summary
<Brief 2-3 sentence summary of what this feature does and why>

### Complexity Assessment
**Complexity**: [Low/Medium/High]
**Estimated Confidence**: [X/10] for one-pass implementation success

### Key Implementation Details
- **Primary Systems**: <List main components affected>
- **New Dependencies**: <Any new libraries required, or "None">
- **Breaking Changes**: <Yes/No and explanation if yes>

### Implementation Approach
<2-3 bullet points summarizing the approach>

### Risks & Considerations
<Key risks or things to watch out for during implementation>

### Next Steps
To implement this plan, use:
```bash
@remote-agent /command-invoke execute-github .agents/plans/<plan-name>.md feature-<branch-name>
```

**Branch Status**: Plan committed and pushed to `feature-<branch-name>`
**Ready for Implementation**: ✅
```
## Quality Criteria
|
||||
|
||||
### Context Completeness ✓
|
||||
|
||||
- [ ] All necessary patterns identified and documented
|
||||
- [ ] External library usage documented with links
|
||||
- [ ] Integration points clearly mapped
|
||||
- [ ] Gotchas and anti-patterns captured
|
||||
- [ ] Every task has executable validation command
|
||||
|
||||
### Implementation Ready ✓
|
||||
|
||||
- [ ] Another developer could execute without additional context
|
||||
- [ ] Tasks ordered by dependency (can execute top-to-bottom)
|
||||
- [ ] Each task is atomic and independently testable
|
||||
- [ ] Pattern references include specific file:line numbers
|
||||
|
||||
### Pattern Consistency ✓
|
||||
|
||||
- [ ] Tasks follow existing codebase conventions
|
||||
- [ ] New patterns justified with clear rationale
|
||||
- [ ] No reinvention of existing patterns or utils
|
||||
- [ ] Testing approach matches project standards
|
||||
|
||||
### Information Density ✓
|
||||
|
||||
- [ ] No generic references (all specific and actionable)
|
||||
- [ ] URLs include section anchors when applicable
|
||||
- [ ] Task descriptions use codebase keywords
|
||||
- [ ] Validation commands are non interactive executable
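"Non-interactive and executable" means a validation command runs with no prompts and reports pass/fail purely through its exit code, so an agent can run it unattended. A minimal sketch of that contract in Python — the `run_validation` helper and the sample commands are hypothetical stand-ins, not part of this repo:

```python
import subprocess
import sys

def run_validation(cmd: list[str]) -> bool:
    """Run a validation command with no stdin; pass/fail comes only from the exit code."""
    result = subprocess.run(cmd, capture_output=True, text=True, stdin=subprocess.DEVNULL)
    return result.returncode == 0

# Stand-ins for real project checks such as a type-check or a test run:
print(run_validation([sys.executable, "-c", "print('ok')"]))          # exit code 0 -> True
print(run_validation([sys.executable, "-c", "raise SystemExit(1)"]))  # exit code 1 -> False
```

A command that opens an editor, asks a yes/no question, or enters watch mode would fail this contract even if it eventually succeeds.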

### GitHub Integration ✓

- [ ] Feature branch created with proper naming convention
- [ ] Plan committed to feature branch
- [ ] Branch pushed to GitHub remote
- [ ] Final summary formatted for GitHub comment

## Success Metrics

**One-Pass Implementation**: Execution agent can complete the feature without additional research or clarification

**Validation Complete**: Every task has at least one working validation command

**Context Rich**: The plan passes the "No Prior Knowledge" test - someone unfamiliar with the codebase can implement it using only the plan's content

**GitHub Ready**: Plan is committed to a feature branch and ready for the GitHub-native workflow

**Confidence Score**: X/10 that execution will succeed on first attempt

**.env.example** (16 lines changed)

````diff
@@ -33,9 +33,11 @@ WEBHOOK_SECRET=your_random_secret_string
 # Usernames are case-insensitive (octocat == Octocat)
 GITHUB_ALLOWED_USERS=octocat,monalisa
 
-# Platforms
-TELEGRAM_BOT_TOKEN=<from @BotFather>
-DISCORD_BOT_TOKEN=<from Discord Developer Portal>
+# Platforms - set the tokens for the ones you want to use
+# Telegram - <get token from @BotFather>
+TELEGRAM_BOT_TOKEN=
+# Discord - <get token from Discord Developer Portal>
+DISCORD_BOT_TOKEN=
 
 # Discord User Whitelist (optional - comma-separated user IDs)
 # When set, only listed Discord users can interact with the bot
````
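Both whitelists are plain comma-separated env values. A minimal sketch of how such a check could work — a hypothetical helper for illustration, not the repo's actual parsing (and note that Discord IDs are numeric, so case-insensitivity only matters for the GitHub username list):

```python
def parse_allowlist(raw: str) -> set[str]:
    """Split a comma-separated env value into a normalized set of entries."""
    return {entry.strip().lower() for entry in raw.split(",") if entry.strip()}

allowed = parse_allowlist("octocat,monalisa")

def is_allowed(username: str) -> bool:
    # Usernames compare case-insensitively (octocat == Octocat)
    return username.lower() in allowed

print(is_allowed("Octocat"))   # True
print(is_allowed("mallory"))   # False
```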

````diff
@@ -55,7 +57,7 @@ DISCORD_STREAMING_MODE=batch # batch (default) | stream
 GITHUB_STREAMING_MODE=batch # batch (default) | stream
 
 # Bot Display Name (shown in batch mode "starting" message)
-BOT_DISPLAY_NAME=The agent # e.g., "My-bot", "CodeBot", etc.
+BOT_DISPLAY_NAME=CodingAgent # e.g., "My-bot", "CodeBot", etc.
 
 # GitHub Bot Mention (optional - for @mention detection in GitHub issues/PRs)
 # When set, the bot will respond to this mention name instead of the default @remote-agent
@@ -67,10 +69,10 @@ GITHUB_BOT_MENTION=remote-agent
 # RECOMMENDED: Use a path outside your project directory to avoid nested repos
 # Examples:
 # - /tmp/remote-agent-workspace (temporary, auto-cleaned on reboot - Linux/Mac)
-# - ~/remote-agent-workspace (persistent in home directory)
-# - C:\temp\remote-agent-workspace (Windows)
+# - ~/remote-agent-workspace (persistent in home directory - Linux/Mac)
+# - C:Users\[your-user-ID]\remote-agent-workspace (Windows)
 # AVOID: ./workspace (causes repo-inside-repo when working on this project)
-WORKSPACE_PATH=/tmp/remote-agent-workspace
+WORKSPACE_PATH=
 PORT=3000
 
 # Concurrency
````

**README.md** (121 lines changed)

````diff
@@ -74,16 +74,11 @@ The `WORKSPACE_PATH` determines where cloned repositories are stored. **Use a pa
 
 ```env
-# Recommended options
-WORKSPACE_PATH=/tmp/remote-agent-workspace # Temporary (auto-cleaned on reboot)
-# or
-WORKSPACE_PATH=~/remote-agent-workspace # Persistent in home directory
+WORKSPACE_PATH=~/remote-agent-workspace (persistent in home directory - Linux/Mac)
+WORKSPACE_PATH=C:Users\[your-user-ID]\remote-agent-workspace (Windows)
 ```
 
 **Why avoid `./workspace`?**
 - **Repo nesting**: When working on this repo's issues, clones nest inside the development directory
 - **Path confusion**: Similar paths like `remote-coding-agent` and `workspace/remote-coding-agent` are easy to mix up
 - **Git worktree conflicts**: `git worktree list` shows different results depending on which repo you're in
 
 **Docker note**: Inside containers, the path is always `/workspace` (mapped from your host `WORKSPACE_PATH` in docker-compose.yml).
 
 **Database Setup - Choose One:**
````

````diff
@@ -97,17 +92,26 @@ Set your remote connection string:
 DATABASE_URL=postgresql://user:password@host:5432/dbname
 ```
 
-Run migrations manually after first startup:
+**For fresh installations**, run the combined migration:
 
 ```bash
-# Download the migration file or use psql directly
-psql $DATABASE_URL < migrations/001_initial_schema.sql
+psql $DATABASE_URL < migrations/000_combined.sql
 ```
 
-This creates 3 tables:
+This creates 4 tables:
 - `remote_agent_codebases` - Repository metadata
 - `remote_agent_conversations` - Platform conversation tracking
 - `remote_agent_sessions` - AI session management
+- `remote_agent_command_templates` - Global command templates
+
+**For updates to existing installations**, run only the migrations you haven't applied yet:
+
+```bash
+# Check which migrations you've already run, then apply new ones:
+psql $DATABASE_URL < migrations/002_command_templates.sql
+psql $DATABASE_URL < migrations/003_add_worktree.sql
+psql $DATABASE_URL < migrations/004_worktree_sharing.sql
+```
 
 </details>
````

````diff
@@ -120,7 +124,28 @@ Use the `with-db` profile for automatic PostgreSQL setup:
 DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent
 ```
 
-Database will be created automatically when you start with `docker compose --profile with-db`.
+**For fresh installations**, database schema is created automatically when you start with `docker compose --profile with-db`. The combined migration runs on first startup.
+
+**For updates to existing Docker installations**, you need to manually run new migrations:
+
+```bash
+# Connect to the running postgres container
+docker compose exec postgres psql -U postgres -d remote_coding_agent
+
+# Then run the migrations you haven't applied yet
+\i /migrations/002_command_templates.sql
+\i /migrations/003_add_worktree.sql
+\i /migrations/004_worktree_sharing.sql
+\q
+```
+
+Or from your host machine (requires `psql` installed):
+
+```bash
+psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/002_command_templates.sql
+psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/003_add_worktree.sql
+psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/004_worktree_sharing.sql
+```
 
 </details>
````

````diff
@@ -364,6 +389,78 @@ Interact by @mentioning `@remote-agent` in issues or PRs:
 
 </details>
 
+<details>
+<summary><b>💬 Discord</b></summary>
+
+**Create Discord Bot:**
+
+1. Visit [Discord Developer Portal](https://discord.com/developers/applications)
+2. Click "New Application" → Enter a name → Click "Create"
+3. Go to the "Bot" tab in the left sidebar
+4. Click "Add Bot" → Confirm
+
+**Get Bot Token:**
+
+1. Under the Bot tab, click "Reset Token"
+2. Copy the token (a long alphanumeric string)
+3. **Save it securely** - you won't be able to see it again
+
+**Enable Message Content Intent (Required):**
+
+1. Scroll down to "Privileged Gateway Intents"
+2. Enable **"Message Content Intent"** (required for the bot to read messages)
+3. Save changes
+
+**Invite Bot to Your Server:**
+
+1. Go to "OAuth2" → "URL Generator" in the left sidebar
+2. Under "Scopes", select:
+   - ✓ `bot`
+3. Under "Bot Permissions", select:
+   - ✓ Send Messages
+   - ✓ Read Message History
+   - ✓ Create Public Threads (optional, for thread support)
+   - ✓ Send Messages in Threads (optional, for thread support)
+4. Copy the generated URL at the bottom
+5. Paste it in your browser and select your server
+6. Click "Authorize"
+
+**Note:** You need "Manage Server" permission to add bots.
+
+**Set environment variable:**
+
+```env
+DISCORD_BOT_TOKEN=your_bot_token_here
+```
+
+**Configure user whitelist (optional):**
+
+To restrict bot access to specific users, enable Developer Mode in Discord:
+1. User Settings → Advanced → Enable "Developer Mode"
+2. Right-click on users → "Copy User ID"
+3. Add to environment:
+
+```env
+DISCORD_ALLOWED_USER_IDS=123456789012345678,987654321098765432
+```
+
+**Configure streaming mode (optional):**
+
+```env
+DISCORD_STREAMING_MODE=batch # batch (default) | stream
+```
+
+**For streaming mode details, see [Advanced Configuration](#advanced-configuration).**
+
+**Usage:**
+
+The bot responds to:
+- **Direct Messages**: Just send messages directly
+- **Server Channels**: @mention the bot (e.g., `@YourBotName help me with this code`)
+- **Threads**: Bot maintains context in thread conversations
+
+</details>
 
 ---
 
 ### 4. Start the Application
````

**docker-compose.yml**

````diff
@@ -53,7 +53,10 @@ services:
       POSTGRES_PASSWORD: postgres
     volumes:
       - postgres_data:/var/lib/postgresql
-      - ./migrations:/docker-entrypoint-initdb.d
+      # Auto-run combined migration on first startup
+      - ./migrations/000_combined.sql:/docker-entrypoint-initdb.d/000_combined.sql:ro
+      # Mount all migrations for manual updates (accessible via /migrations inside container)
+      - ./migrations:/migrations:ro
     ports:
       - "${POSTGRES_PORT:-5432}:5432"
     healthcheck:
````
**migrations/000_combined.sql** (new file, 83 lines)

```sql
-- Remote Coding Agent - Combined Schema
-- Version: Combined (includes migrations 001-004)
-- Description: Complete database schema (idempotent - safe to run multiple times)

-- ============================================================================
-- Migration 001: Initial Schema
-- ============================================================================

-- Table 1: Codebases
CREATE TABLE IF NOT EXISTS remote_agent_codebases (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    repository_url VARCHAR(500),
    default_cwd VARCHAR(500) NOT NULL,
    ai_assistant_type VARCHAR(20) DEFAULT 'claude',
    commands JSONB DEFAULT '{}'::jsonb,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Table 2: Conversations
CREATE TABLE IF NOT EXISTS remote_agent_conversations (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    platform_type VARCHAR(20) NOT NULL,
    platform_conversation_id VARCHAR(255) NOT NULL,
    codebase_id UUID REFERENCES remote_agent_codebases(id),
    cwd VARCHAR(500),
    ai_assistant_type VARCHAR(20) DEFAULT 'claude',
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    UNIQUE(platform_type, platform_conversation_id)
);

CREATE INDEX IF NOT EXISTS idx_remote_agent_conversations_codebase ON remote_agent_conversations(codebase_id);

-- Table 3: Sessions
CREATE TABLE IF NOT EXISTS remote_agent_sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    conversation_id UUID REFERENCES remote_agent_conversations(id) ON DELETE CASCADE,
    codebase_id UUID REFERENCES remote_agent_codebases(id),
    ai_assistant_type VARCHAR(20) NOT NULL,
    assistant_session_id VARCHAR(255),
    active BOOLEAN DEFAULT true,
    metadata JSONB DEFAULT '{}'::jsonb,
    started_at TIMESTAMP DEFAULT NOW(),
    ended_at TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_remote_agent_sessions_conversation ON remote_agent_sessions(conversation_id, active);
CREATE INDEX IF NOT EXISTS idx_remote_agent_sessions_codebase ON remote_agent_sessions(codebase_id);

-- ============================================================================
-- Migration 002: Command Templates
-- ============================================================================

CREATE TABLE IF NOT EXISTS remote_agent_command_templates (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    description TEXT,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_remote_agent_command_templates_name ON remote_agent_command_templates(name);

-- ============================================================================
-- Migration 003: Add Worktree Support
-- ============================================================================

ALTER TABLE remote_agent_conversations
    ADD COLUMN IF NOT EXISTS worktree_path VARCHAR(500);

COMMENT ON COLUMN remote_agent_conversations.worktree_path IS
    'Path to git worktree for this conversation. If set, AI works here instead of cwd.';

-- ============================================================================
-- Migration 004: Worktree Sharing Index
-- ============================================================================

CREATE INDEX IF NOT EXISTS idx_remote_agent_conversations_worktree
    ON remote_agent_conversations(worktree_path)
    WHERE worktree_path IS NOT NULL;
```
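The "idempotent" note in this file's header is what makes shipping a single combined migration safe: every statement guards with `IF NOT EXISTS`, so re-running the script against an already-migrated database is a no-op rather than an error. A small sketch of that property using SQLite (for illustration only; the real schema targets PostgreSQL and uses types such as UUID and JSONB that SQLite lacks):

```python
import sqlite3

# Simplified stand-in for 000_combined.sql: same IF NOT EXISTS guards.
MIGRATION = """
CREATE TABLE IF NOT EXISTS remote_agent_codebases (
    id TEXT PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_codebases_name
    ON remote_agent_codebases(name);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(MIGRATION)  # first run: creates the table and index
conn.executescript(MIGRATION)  # second run: succeeds silently, nothing to do

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'remote_agent%'"
)]
print(tables)  # ['remote_agent_codebases']
```

Without the guards, the second `executescript` call would raise a "table already exists" error, which is exactly what the combined file avoids.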

**src/index.ts** (81 lines changed)

````diff
@@ -20,10 +20,10 @@ import { classifyAndFormatError } from './utils/error-formatter';
 import { seedDefaultCommands } from './scripts/seed-commands';
 
 async function main(): Promise<void> {
-  console.log('[App] Starting Remote Coding Agent (Telegram + Claude MVP)');
+  console.log('[App] Starting Remote Coding Agent');
 
   // Validate required environment variables
-  const required = ['DATABASE_URL', 'TELEGRAM_BOT_TOKEN'];
+  const required = ['DATABASE_URL'];
   const missing = required.filter(v => !process.env[v]);
   if (missing.length > 0) {
     console.error('[App] Missing required environment variables:', missing.join(', '));
@@ -82,6 +82,20 @@ async function main(): Promise<void> {
   const testAdapter = new TestAdapter();
   await testAdapter.start();
 
+  // Check that at least one platform is configured
+  const hasTelegram = Boolean(process.env.TELEGRAM_BOT_TOKEN);
+  const hasDiscord = Boolean(process.env.DISCORD_BOT_TOKEN);
+  const hasGitHub = Boolean(process.env.GITHUB_TOKEN && process.env.WEBHOOK_SECRET);
+
+  if (!hasTelegram && !hasDiscord && !hasGitHub) {
+    console.error('[App] No platform adapters configured.');
+    console.error('[App] You must configure at least one platform:');
+    console.error('[App]   - Telegram: Set TELEGRAM_BOT_TOKEN');
+    console.error('[App]   - Discord: Set DISCORD_BOT_TOKEN');
+    console.error('[App]   - GitHub: Set GITHUB_TOKEN and WEBHOOK_SECRET');
+    process.exit(1);
+  }
+
   // Initialize GitHub adapter (conditional)
   let github: GitHubAdapter | null = null;
   if (process.env.GITHUB_TOKEN && process.env.WEBHOOK_SECRET) {
@@ -289,36 +303,40 @@ async function main(): Promise<void> {
     console.log(`[Express] Health check server listening on port ${String(port)}`);
   });
 
-  // Initialize platform adapter (Telegram)
-  const streamingMode = (process.env.TELEGRAM_STREAMING_MODE ?? 'stream') as 'stream' | 'batch';
-  // TELEGRAM_BOT_TOKEN is validated above in required env vars check
-  const telegram = new TelegramAdapter(process.env.TELEGRAM_BOT_TOKEN!, streamingMode);
+  // Initialize Telegram adapter (conditional)
+  let telegram: TelegramAdapter | null = null;
+  if (process.env.TELEGRAM_BOT_TOKEN) {
+    const streamingMode = (process.env.TELEGRAM_STREAMING_MODE ?? 'stream') as 'stream' | 'batch';
+    telegram = new TelegramAdapter(process.env.TELEGRAM_BOT_TOKEN, streamingMode);
 
-  // Register message handler (auth is handled internally by adapter)
-  telegram.onMessage(async ({ conversationId, message }) => {
-    // Fire-and-forget: handler returns immediately, processing happens async
-    lockManager
-      .acquireLock(conversationId, async () => {
-        await handleMessage(telegram, conversationId, message);
-      })
-      .catch(async error => {
-        console.error('[Telegram] Failed to process message:', error);
-        try {
-          const userMessage = classifyAndFormatError(error as Error);
-          await telegram.sendMessage(conversationId, userMessage);
-        } catch (sendError) {
-          console.error('[Telegram] Failed to send error message to user:', sendError);
-        }
-      });
-  });
+    // Register message handler (auth is handled internally by adapter)
+    telegram.onMessage(async ({ conversationId, message }) => {
+      // Fire-and-forget: handler returns immediately, processing happens async
+      lockManager
+        .acquireLock(conversationId, async () => {
+          await handleMessage(telegram!, conversationId, message);
+        })
+        .catch(async error => {
+          console.error('[Telegram] Failed to process message:', error);
+          try {
+            const userMessage = classifyAndFormatError(error as Error);
+            await telegram!.sendMessage(conversationId, userMessage);
+          } catch (sendError) {
+            console.error('[Telegram] Failed to send error message to user:', sendError);
+          }
+        });
+    });
 
-  // Start bot
-  await telegram.start();
+    // Start bot
+    await telegram.start();
+  } else {
+    console.log('[Telegram] Adapter not initialized (missing TELEGRAM_BOT_TOKEN)');
+  }
 
   // Graceful shutdown
   const shutdown = (): void => {
     console.log('[App] Shutting down gracefully...');
-    telegram.stop();
+    telegram?.stop();
     discord?.stop();
     void pool.end().then(() => {
       console.log('[Database] Connection pool closed');
@@ -329,11 +347,14 @@ async function main(): Promise<void> {
   process.once('SIGINT', shutdown);
   process.once('SIGTERM', shutdown);
 
+  // Show active platforms
+  const activePlatforms = [];
+  if (telegram) activePlatforms.push('Telegram');
+  if (discord) activePlatforms.push('Discord');
+  if (github) activePlatforms.push('GitHub');
+
   console.log('[App] Remote Coding Agent is ready!');
-  console.log('[App] Send messages to your Telegram bot to get started');
-  if (discord) {
-    console.log('[App] Discord bot is also running');
-  }
+  console.log(`[App] Active platforms: ${activePlatforms.join(', ')}`);
   console.log(
     '[App] Test endpoint available: POST http://localhost:' + String(port) + '/test/message'
   );
````

````diff
@@ -165,7 +165,7 @@ describe('orchestrator', () => {
 
     expect(platform.sendMessage).toHaveBeenCalledWith(
       'chat-456',
-      'No codebase configured. Use /clone first.'
+      'No codebase configured. Use /clone for a new repo or /repos to list your current repos you can switch to.'
     );
   });
 
@@ -265,7 +265,7 @@ describe('orchestrator', () => {
 
     expect(platform.sendMessage).toHaveBeenCalledWith(
      'chat-456',
-      'No codebase configured. Use /clone first.'
+      'No codebase configured. Use /clone for a new repo or /repos to list your current repos you can switch to.'
     );
   });
 });
````

````diff
@@ -124,7 +124,7 @@ export async function handleMessage(
   const commandArgs = args.slice(1);
 
   if (!conversation.codebase_id) {
-    await platform.sendMessage(conversationId, 'No codebase configured. Use /clone first.');
+    await platform.sendMessage(conversationId, 'No codebase configured. Use /clone for a new repo or /repos to list your current repos you can switch to.');
     return;
   }
 
@@ -196,7 +196,7 @@ export async function handleMessage(
   } else {
     // Regular message - route through router template
     if (!conversation.codebase_id) {
-      await platform.sendMessage(conversationId, 'No codebase configured. Use /clone first.');
+      await platform.sendMessage(conversationId, 'No codebase configured. Use /clone for a new repo or /repos to list your current repos you can switch to.');
       return;
     }
 
@@ -257,9 +257,13 @@ export async function handleMessage(
   }
 
   // Check for plan→execute transition (requires NEW session per PRD)
   // Note: The planning command is named 'plan-feature', not 'plan'
+  // Supports both regular and GitHub workflows:
+  // - plan-feature → execute (regular workflow)
+  // - plan-feature-github → execute-github (GitHub workflow with staging)
   const needsNewSession =
-    commandName === 'execute' && session?.metadata?.lastCommand === 'plan-feature';
+    (commandName === 'execute' && session?.metadata?.lastCommand === 'plan-feature') ||
+    (commandName === 'execute-github' && session?.metadata?.lastCommand === 'plan-feature-github');
 
   if (needsNewSession) {
     console.log('[Orchestrator] Plan→Execute transition: creating new session');
````