Dynamous Remote Coding Agent
Control AI coding assistants (Claude Code, Codex) remotely from Telegram, GitHub, and more. Built for developers who want to code from anywhere with persistent sessions and flexible workflows.
Quick Start: Core Configuration • AI Assistant Setup • Platform Setup • Start the App • Usage Guide
Features
- Multi-Platform Support: Interact via Telegram, Slack, Discord, GitHub issues/PRs, and more
- Multiple AI Assistants: Choose between Claude Code or Codex (or both)
- Persistent Sessions: Sessions survive container restarts with full context preservation
- Codebase Management: Clone and work with any GitHub repository
- Flexible Streaming: Real-time or batch message delivery per platform
- Generic Command System: User-defined commands versioned with Git
- Docker Ready: Simple deployment with Docker Compose
Prerequisites
System Requirements:
- Docker & Docker Compose (for deployment)
- Bun 1.0+ (for local development)
Accounts Required:
- GitHub account (for repository cloning via /clone command)
- At least one of: Claude Pro/Max subscription OR Codex account
- At least one of: Telegram, Slack, Discord, or GitHub account (for interaction)
Quick Start
Option 1: Docker (not working yet; will work once the repo goes public)
# 1. Get the files
mkdir remote-agent && cd remote-agent
curl -fsSL https://raw.githubusercontent.com/dynamous-community/remote-coding-agent/main/deploy/docker-compose.yml -o docker-compose.yml
curl -fsSL https://raw.githubusercontent.com/dynamous-community/remote-coding-agent/main/deploy/.env.example -o .env
# 2. Configure (edit .env with your tokens)
nano .env
# 3. Run
docker compose --profile <your-profile> up -d
# 4. Check it's working
curl http://localhost:3000/health
Option 2: Local Development
# 1. Clone and install
git clone https://github.com/dynamous-community/remote-coding-agent
cd remote-coding-agent
bun install
# 2. Configure
cp .env.example .env
nano .env # Add your tokens
# 3. Start database
docker compose --profile with-db up -d postgres
# 4. Run migrations
psql $DATABASE_URL < migrations/000_combined.sql
# 5. Start with hot reload
bun run dev
# 6. Validate setup
bun run validate
Option 3: Self-Hosted Production
See Cloud Deployment Guide for deploying to:
- DigitalOcean, Linode, AWS EC2, or any VPS
- With automatic HTTPS via Caddy
Directory Structure
The app uses ~/.archon/ for all managed files:
~/.archon/
├── workspaces/ # Cloned repositories (auto-synced before worktree creation)
├── worktrees/ # Git worktrees for isolation
├── archon.db # SQLite database (when DATABASE_URL not set)
└── config.yaml # Optional: global configuration
On Windows: C:\Users\<username>\.archon\
In Docker: /.archon/
See Configuration Guide for customization options.
Setup Guide
Get started:
git clone https://github.com/dynamous-community/remote-coding-agent
cd remote-coding-agent
bun install
1. Core Configuration (Required)
Create environment file:
cp .env.example .env
Set these required variables:
| Variable | Purpose | How to Get |
|---|---|---|
| DATABASE_URL | PostgreSQL connection (optional) | See database options below. Omit to use SQLite |
| GH_TOKEN | Repository cloning | Generate token with repo scope |
| GITHUB_TOKEN | Same as GH_TOKEN | Use same token value |
| PORT | HTTP server port | Default: 3000 (optional) |
| ARCHON_HOME | (Optional) Override base directory | Default: ~/.archon |
GitHub Personal Access Token Setup:
- Visit GitHub Settings > Personal Access Tokens
- Click "Generate new token (classic)" → select scope: repo
- Copy the token (starts with ghp_...) and set both variables:
# .env
GH_TOKEN=ghp_your_token_here
GITHUB_TOKEN=ghp_your_token_here # Same value
Note: Repository clones are stored in ~/.archon/workspaces/ by default (or /.archon/workspaces/ in Docker). Set ARCHON_HOME to override the base directory.
Database Setup - Choose One:
Option A: SQLite (Default - No Setup Required)
Simply omit the DATABASE_URL variable from your .env file. The app will automatically:
- Create a SQLite database at ~/.archon/archon.db
- Initialize the schema on first run
- Use this database for all operations
Pros:
- Zero configuration required
- No external database needed
- Perfect for single-user CLI usage
Cons:
- Not suitable for multi-container deployments
- No network access (CLI and server can't share database across different hosts)
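The backend selection rule is simple: presence of DATABASE_URL means PostgreSQL, otherwise SQLite under the Archon home directory. This is a hypothetical sketch of that rule (the function name and shape are illustrative, not taken from the codebase); it also honors the ARCHON_HOME override from the Core Configuration table:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Sketch of the auto-detection described above:
// DATABASE_URL set → use it as a PostgreSQL connection string;
// otherwise fall back to a SQLite file under ARCHON_HOME (default ~/.archon).
function databaseTarget(env: Record<string, string | undefined>): string {
  if (env.DATABASE_URL) return env.DATABASE_URL; // PostgreSQL
  const base = env.ARCHON_HOME ?? join(homedir(), ".archon");
  return join(base, "archon.db"); // SQLite fallback
}
```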
Option B: Remote PostgreSQL (Supabase, Neon)
Set your remote connection string:
DATABASE_URL=postgresql://user:password@host:5432/dbname
For fresh installations, run the combined migration:
psql $DATABASE_URL < migrations/000_combined.sql
This creates 5 tables:
- remote_agent_codebases - Repository metadata
- remote_agent_conversations - Platform conversation tracking
- remote_agent_sessions - AI session management
- remote_agent_command_templates - Global command templates
- remote_agent_isolation_environments - Worktree isolation tracking
For updates to existing installations, run only the migrations you haven't applied yet:
# Check which migrations you've already run, then apply new ones:
psql $DATABASE_URL < migrations/002_command_templates.sql
psql $DATABASE_URL < migrations/003_add_worktree.sql
psql $DATABASE_URL < migrations/004_worktree_sharing.sql
psql $DATABASE_URL < migrations/006_isolation_environments.sql
psql $DATABASE_URL < migrations/007_drop_legacy_columns.sql
Option C: Local PostgreSQL (via Docker)
Use the with-db profile for automatic PostgreSQL setup:
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent
For fresh installations, database schema is created automatically when you start with docker compose --profile with-db. The combined migration runs on first startup.
For updates to existing Docker installations, you need to manually run new migrations:
# Connect to the running postgres container
docker compose exec postgres psql -U postgres -d remote_coding_agent
# Then run the migrations you haven't applied yet
\i /migrations/002_command_templates.sql
\i /migrations/003_add_worktree.sql
\i /migrations/004_worktree_sharing.sql
\i /migrations/006_isolation_environments.sql
\i /migrations/007_drop_legacy_columns.sql
\q
Or from your host machine (requires psql installed):
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/002_command_templates.sql
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/003_add_worktree.sql
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/004_worktree_sharing.sql
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/006_isolation_environments.sql
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/007_drop_legacy_columns.sql
2. AI Assistant Setup (Choose At Least One)
You must configure at least one AI assistant. Both can be configured if desired.
🤖 Claude Code
Recommended for Claude Pro/Max subscribers.
Authentication Options:
Claude Code supports three authentication modes via CLAUDE_USE_GLOBAL_AUTH:
- Global Auth (set to true): uses credentials from claude /login
- Explicit Tokens (set to false): uses tokens from the env vars below
- Auto-Detect (not set): uses tokens if present in the env, otherwise global auth
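The resolution order above can be sketched as follows; this is an illustrative helper (name and signature assumed, not from the codebase):

```typescript
// Hypothetical sketch of the CLAUDE_USE_GLOBAL_AUTH resolution described above.
function resolveClaudeAuth(env: Record<string, string | undefined>): "global" | "tokens" {
  const flag = env.CLAUDE_USE_GLOBAL_AUTH;
  if (flag === "true") return "global";   // forced global auth
  if (flag === "false") return "tokens";  // forced explicit tokens
  // Auto-detect: prefer explicit tokens when any are present
  return env.CLAUDE_CODE_OAUTH_TOKEN || env.CLAUDE_API_KEY ? "tokens" : "global";
}
```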
Option 1: Global Auth (Recommended)
CLAUDE_USE_GLOBAL_AUTH=true
Option 2: OAuth Token
# Install Claude Code CLI first: https://docs.claude.com/claude-code/installation
claude setup-token
# Copy the token starting with sk-ant-oat01-...
CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-xxxxx
Option 3: API Key (Pay-per-use)
- Visit console.anthropic.com/settings/keys
- Create a new key (starts with sk-ant-)
CLAUDE_API_KEY=sk-ant-xxxxx
Set as default assistant (optional):
If you want Claude to be the default AI assistant for new conversations without codebase context, set this environment variable:
DEFAULT_AI_ASSISTANT=claude
🤖 Codex
Authenticate with Codex CLI:
# Install Codex CLI first: https://docs.codex.com/installation
codex login
# Follow browser authentication flow
Extract credentials from auth file:
On Linux/Mac:
cat ~/.codex/auth.json
On Windows:
type %USERPROFILE%\.codex\auth.json
Set all four environment variables:
CODEX_ID_TOKEN=eyJhbGc...
CODEX_ACCESS_TOKEN=eyJhbGc...
CODEX_REFRESH_TOKEN=rt_...
CODEX_ACCOUNT_ID=6a6a7ba6-...
Set as default assistant (optional):
If you want Codex to be the default AI assistant for new conversations without codebase context, set this environment variable:
DEFAULT_AI_ASSISTANT=codex
How Assistant Selection Works:
- Assistant type is set per codebase (auto-detected from .codex/ or .claude/ folders)
- Once a conversation starts, the assistant type is locked for that conversation
- DEFAULT_AI_ASSISTANT (optional) is used only for new conversations without codebase context
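The folder-based auto-detection might look like the following sketch (function name assumed; the actual detection logic in the codebase may differ, e.g. in precedence when both folders exist):

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical sketch of per-codebase assistant auto-detection:
// a .claude/ or .codex/ folder selects the assistant; otherwise
// fall back to DEFAULT_AI_ASSISTANT.
function detectAssistant(
  repoPath: string,
  fallback: "claude" | "codex",
): "claude" | "codex" {
  if (existsSync(join(repoPath, ".claude"))) return "claude";
  if (existsSync(join(repoPath, ".codex"))) return "codex";
  return fallback;
}
```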
3. Platform Adapter Setup (Choose At Least One)
You must configure at least one platform to interact with your AI assistant.
💬 Telegram
Create Telegram Bot:
- Message @BotFather on Telegram
- Send /newbot and follow the prompts
- Copy the bot token (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz)
Set environment variable:
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHI...
Configure streaming mode (optional):
TELEGRAM_STREAMING_MODE=stream # stream (default) | batch
For streaming mode details, see Advanced Configuration.
💼 Slack
Create Slack App with Socket Mode:
See the detailed Slack Setup Guide for step-by-step instructions.
Quick Overview:
- Create app at api.slack.com/apps
- Enable Socket Mode and get the App Token (xapp-...)
- Add Bot Token Scopes: app_mentions:read, chat:write, channels:history, im:history, im:write
- Subscribe to events: app_mention, message.im
- Install to workspace and get the Bot Token (xoxb-...)
Set environment variables:
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token
Optional configuration:
# Restrict to specific users (comma-separated Slack user IDs)
SLACK_ALLOWED_USER_IDS=U1234ABCD,W5678EFGH
# Streaming mode
SLACK_STREAMING_MODE=batch # batch (default) | stream
Usage:
Interact by @mentioning your bot in channels or DM directly:
@your-bot /clone https://github.com/user/repo
@your-bot /status
Thread replies maintain conversation context, enabling workflows like:
- Clone repo in main channel
- Continue work in thread
- Use /worktree for parallel development
🐙 GitHub Webhooks
Requirements:
- GitHub repository with issues enabled
- GITHUB_TOKEN already set in Core Configuration above
- Public endpoint for webhooks (see ngrok setup below for local development)
Step 1: Generate Webhook Secret
On Linux/Mac:
openssl rand -hex 32
On Windows (PowerShell):
-join ((1..32) | ForEach-Object { '{0:x2}' -f (Get-Random -Maximum 256) })
Save this secret - you'll need it for steps 3 and 4.
Step 2: Expose Local Server (Development Only)
Using ngrok (Free Tier)
# Install ngrok: https://ngrok.com/download
# Or: choco install ngrok (Windows)
# Or: brew install ngrok (Mac)
# Start tunnel
ngrok http 3000
# Copy the HTTPS URL (e.g., https://abc123.ngrok-free.app)
# ⚠️ Free tier URLs change on restart
Keep this terminal open while testing.
Using Cloudflare Tunnel (Persistent URLs)
# Install: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/
cloudflared tunnel --url http://localhost:3000
# Get persistent URL from Cloudflare dashboard
Persistent URLs survive restarts.
For production deployments, use your deployed server URL (no tunnel needed).
Step 3: Configure GitHub Webhook
Go to your repository settings:
- Navigate to: https://github.com/owner/repo/settings/hooks
- Click "Add webhook"
- Note: For multiple repositories, you'll need to add the webhook to each one individually
Webhook Configuration:
| Field | Value |
|---|---|
| Payload URL | Local: https://abc123.ngrok-free.app/webhooks/github; Production: https://your-domain.com/webhooks/github |
| Content type | application/json |
| Secret | Paste the secret from Step 1 |
| SSL verification | Enable SSL verification (recommended) |
| Events | Select "Let me select individual events": ✓ Issues ✓ Issue comments ✓ Pull requests |
Click "Add webhook" and verify it shows a green checkmark after delivery.
Step 4: Set Environment Variables
WEBHOOK_SECRET=your_secret_from_step_1
Important: The WEBHOOK_SECRET must match exactly what you entered in GitHub's webhook configuration.
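GitHub signs each webhook delivery with an HMAC-SHA256 of the raw request body using this secret, sent in the X-Hub-Signature-256 header as "sha256=<hex>". The app's own validation code isn't shown here, but verification generally looks like this sketch:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a GitHub webhook delivery: recompute HMAC-SHA256 of the raw body
// with the shared secret and compare against the X-Hub-Signature-256 header
// in constant time.
function verifySignature(secret: string, body: string, header: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(body).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(header);
  // timingSafeEqual throws on length mismatch, so guard first
  return a.length === b.length && timingSafeEqual(a, b);
}
```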
Step 5: Configure Streaming (Optional)
GITHUB_STREAMING_MODE=batch # batch (default) | stream
For streaming mode details, see Advanced Configuration.
Usage:
Interact by @mentioning @Archon in issues or PRs:
@Archon can you analyze this bug?
@Archon /command-invoke prime
@Archon review this implementation
First mention behavior:
- Automatically clones the repository to /.archon/workspaces/
- Detects and loads commands from .archon/commands/ if present
- Injects full issue/PR context for the AI assistant
Subsequent mentions:
- Resumes existing conversation
- Maintains full context across comments
💬 Discord
Create Discord Bot:
- Visit Discord Developer Portal
- Click "New Application" → Enter a name → Click "Create"
- Go to the "Bot" tab in the left sidebar
- Click "Add Bot" → Confirm
Get Bot Token:
- Under the Bot tab, click "Reset Token"
- Copy the token (starts with a long alphanumeric string)
- Save it securely - you won't be able to see it again
Enable Message Content Intent (Required):
- Scroll down to "Privileged Gateway Intents"
- Enable "Message Content Intent" (required for the bot to read messages)
- Save changes
Invite Bot to Your Server:
- Go to "OAuth2" → "URL Generator" in the left sidebar
- Under "Scopes", select:
- ✓ bot
- Under "Bot Permissions", select:
- ✓ Send Messages
- ✓ Read Message History
- ✓ Create Public Threads (optional, for thread support)
- ✓ Send Messages in Threads (optional, for thread support)
- Copy the generated URL at the bottom
- Paste it in your browser and select your server
- Click "Authorize"
Note: You need "Manage Server" permission to add bots.
Set environment variable:
DISCORD_BOT_TOKEN=your_bot_token_here
Configure user whitelist (optional):
To restrict bot access to specific users, enable Developer Mode in Discord:
- User Settings → Advanced → Enable "Developer Mode"
- Right-click on users → "Copy User ID"
- Add to environment:
DISCORD_ALLOWED_USER_IDS=123456789012345678,987654321098765432
Configure streaming mode (optional):
DISCORD_STREAMING_MODE=batch # batch (default) | stream
For streaming mode details, see Advanced Configuration.
Usage:
The bot responds to:
- Direct Messages: Just send messages directly
- Server Channels: @mention the bot (e.g., @YourBotName help me with this code)
- Threads: Bot maintains context in thread conversations
4. Start the Application
Choose the Docker Compose profile based on your database setup:
Option A: With Remote PostgreSQL (Supabase, Neon, etc.)
Starts only the app container (requires DATABASE_URL set to remote database in .env):
# Start app container
docker compose --profile external-db up -d --build
# View logs
docker compose logs -f app
Option B: With Local PostgreSQL (Docker)
Starts both the app and PostgreSQL containers:
# Start containers
docker compose --profile with-db up -d --build
# Wait for startup (watch logs)
docker compose logs -f app-with-db
# Database tables are created automatically via init script
Option C: Local Development (No Docker)
Run directly with Bun (requires local PostgreSQL or remote DATABASE_URL in .env):
bun install # First time only
bun run dev
Stop the application:
docker compose --profile external-db down # If using Option A
docker compose --profile with-db down # If using Option B
Usage
Available Commands
Once your platform adapter is running, you can use these commands. Type /help to see this list.
Command Templates (Global)
| Command | Description |
|---|---|
| /<name> [args] | Invoke a template directly (e.g., /plan "Add dark mode") |
| /templates | List all available templates |
| /template-add <name> <path> | Add template from file |
| /template-delete <name> | Remove a template |
Codebase Commands (Per-Project)
| Command | Description |
|---|---|
| /command-set <name> <path> [text] | Register a command from file |
| /load-commands <folder> | Bulk load commands (recursive) |
| /command-invoke <name> [args] | Execute a codebase command |
| /commands | List registered commands |
Note: Commands use relative paths (e.g., .archon/commands/plan.md)
Codebase Management
| Command | Description |
|---|---|
| /clone <repo-url> | Clone repository |
| /repos | List repositories (numbered) |
| /repo <#\|name> [pull] | Switch repo (auto-loads commands) |
| /repo-remove <#\|name> | Remove repo and codebase record |
| /getcwd | Show working directory |
| /setcwd <path> | Set working directory |
Tip: Use /repo for quick switching between cloned repos, /setcwd for manual paths.
Worktrees (Isolation)
| Command | Description |
|---|---|
| /worktree create <branch> | Create isolated worktree |
| /worktree list | Show worktrees for this repo |
| /worktree remove [--force] | Remove current worktree |
| /worktree cleanup merged\|stale | Clean up worktrees |
| /worktree orphans | Show all worktrees from git |
Workflows
| Command | Description |
|---|---|
| /workflow list | Show available workflows |
| /workflow reload | Reload workflow definitions |
| /workflow status | Show running workflow details |
| /workflow cancel | Cancel running workflow |
Note: Workflows are YAML files in .archon/workflows/
Session Management
| Command | Description |
|---|---|
| /status | Show conversation state |
| /reset | Clear session completely |
| /reset-context | Reset AI context, keep worktree |
| /help | Show all commands |
Setup
| Command | Description |
|---|---|
| /init | Create .archon structure in current repo |
Example Workflow (Telegram)
Clone a Repository
You: /clone https://github.com/user/my-project
Bot: Repository cloned successfully!
Repository: my-project
✓ Copied 16 default commands
✓ Copied 8 default workflows
Session reset - starting fresh on next message.
You can now start asking questions about the code.
Note: Default commands and workflows are automatically copied to new repos. If the repo already has .archon/commands/ or .archon/workflows/, existing files are preserved. To opt out, set defaults.copyDefaults: false in the repo's .archon/config.yaml.
Ask Questions Directly
You: What's the structure of this repo?
Bot: [Claude analyzes and responds...]
Create Custom Commands (Optional)
You: /init
Bot: Created .archon structure:
.archon/
├── config.yaml
└── commands/
└── example.md
Use /load-commands .archon/commands to register commands.
You can then create your own commands in .archon/commands/ and load them with /load-commands.
Check Status
You: /status
Bot: Platform: telegram
AI Assistant: claude
Codebase: my-project
Repository: https://github.com/user/my-project
Repository: my-project @ main
Worktrees: 0/10
Work in Isolation with Worktrees
You: /worktree create feature-auth
Bot: Worktree created!
Branch: feature-auth
Path: feature-auth/
This conversation now works in isolation.
Run dependency install if needed (e.g., bun install).
Reset Session
You: /reset
Bot: Session cleared. Starting fresh on next message.
Codebase configuration preserved.
Example Workflow (GitHub)
Create an issue or comment on an existing issue/PR:
@your-bot-name can you help me understand the authentication flow?
Bot responds with analysis. Continue the conversation:
@your-bot-name can you create a sequence diagram for this?
Bot maintains context and provides the diagram.
Advanced Configuration
Streaming Modes Explained
Stream Mode
Messages are sent in real-time as the AI generates responses.
Configuration:
TELEGRAM_STREAMING_MODE=stream
GITHUB_STREAMING_MODE=stream
Pros:
- Real-time feedback and progress indication
- More interactive and engaging
- See AI reasoning as it works
Cons:
- More API calls to platform
- May hit rate limits with very long responses
- Creates many messages/comments
Best for: Interactive chat platforms (Telegram)
Batch Mode
Only the final summary message is sent after AI completes processing.
Configuration:
TELEGRAM_STREAMING_MODE=batch
GITHUB_STREAMING_MODE=batch
Pros:
- Single coherent message/comment
- Fewer API calls
- No spam or clutter
Cons:
- No progress indication during processing
- Longer wait for first response
- Can't see intermediate steps
Best for: Issue trackers and async platforms (GitHub)
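The two modes differ only in when messages are flushed to the platform. This illustrative sketch (not the app's actual adapter code) captures the distinction:

```typescript
type Mode = "stream" | "batch";

// Sketch of the two delivery modes described above:
// "stream" forwards each AI output chunk as its own message;
// "batch" accumulates everything and sends one final message.
async function deliver(
  mode: Mode,
  chunks: AsyncIterable<string>,
  send: (msg: string) => void,
): Promise<void> {
  if (mode === "stream") {
    for await (const chunk of chunks) send(chunk); // one message per chunk
  } else {
    let buffer = "";
    for await (const chunk of chunks) buffer += chunk;
    send(buffer); // single final message
  }
}
```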
Concurrency Settings
Control how many conversations the system processes simultaneously:
MAX_CONCURRENT_CONVERSATIONS=10 # Default: 10
How it works:
- Conversations are processed with a lock manager
- If max concurrent limit reached, new messages are queued
- Prevents resource exhaustion and API rate limits
- Each conversation maintains its own independent context
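The queue-when-full behavior can be sketched as a small async limiter; this is an illustrative minimal version, not the app's actual lock manager:

```typescript
// Minimal sketch of the concurrency limit described above:
// up to `max` conversations run at once; further acquires wait
// in FIFO order until a slot is released.
class ConversationLimiter {
  private active = 0;
  private queue: Array<() => void> = [];
  constructor(private max: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // At capacity: park until release() wakes us
    await new Promise<void>((resolve) => this.queue.push(resolve));
    this.active++;
  }

  release(): void {
    this.active--;
    this.queue.shift()?.(); // wake the next queued conversation, if any
  }
}
```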
Check current load:
curl http://localhost:3000/health/concurrency
Response:
{
"status": "ok",
"active": 3,
"queued": 0,
"maxConcurrent": 10
}
Tuning guidance:
- Low resources: Set to 3-5
- Standard: Default 10 works well
- High resources: Can increase to 20-30 (monitor API limits)
Health Check Endpoints
The application exposes health check endpoints for monitoring:
Basic Health Check:
curl http://localhost:3000/health
Returns: {"status":"ok"}
Database Connectivity:
curl http://localhost:3000/health/db
Returns: {"status":"ok","database":"connected"}
Concurrency Status:
curl http://localhost:3000/health/concurrency
Returns: {"status":"ok","active":0,"queued":0,"maxConcurrent":10}
Use cases:
- Docker healthcheck configuration
- Load balancer health checks
- Monitoring and alerting systems (Prometheus, Datadog, etc.)
- CI/CD deployment verification
Custom Command System
Create your own commands by adding markdown files to your codebase:
1. Create command file:
mkdir -p .archon/commands
cat > .archon/commands/analyze.md << 'EOF'
You are an expert code analyzer.
Analyze the following aspect of the codebase: $1
Provide:
1. Current implementation analysis
2. Potential issues or improvements
3. Best practices recommendations
Focus area: $ARGUMENTS
EOF
2. Load commands:
/load-commands .archon/commands
3. Invoke your command:
/command-invoke analyze "security vulnerabilities"
Variable substitution:
- $1, $2, $3, etc. - Positional arguments
- $ARGUMENTS - All arguments as a single string
- $PLAN - Previous plan from session metadata
- $IMPLEMENTATION_SUMMARY - Previous execution summary
Commands are version-controlled with your codebase, not stored in the database.
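The positional and $ARGUMENTS substitutions can be sketched as a simple string replacement (the session-metadata variables $PLAN and $IMPLEMENTATION_SUMMARY are omitted here since they come from stored state; the function name is illustrative, not from the codebase):

```typescript
// Hypothetical sketch of command-template variable substitution:
// $ARGUMENTS expands to all args joined by spaces; $1, $2, ... expand
// to individual positional arguments (empty string if missing).
function substitute(template: string, args: string[]): string {
  return template
    .replace(/\$ARGUMENTS/g, args.join(" "))
    .replace(/\$(\d+)/g, (_, n) => args[Number(n) - 1] ?? "");
}
```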
Workflows (Multi-Step Automation)
Workflows are YAML files that define multi-step AI processes. They can be step-based (sequential commands) or loop-based (autonomous iteration).
Location: .archon/workflows/
Example step-based workflow (.archon/workflows/fix-github-issue.yaml):
name: fix-github-issue
description: |
Use when: User wants to FIX or RESOLVE a GitHub issue.
Does: Investigates root cause -> creates plan -> makes code changes -> creates PR.
model: sonnet # Optional: provider inherited from .archon/config.yaml
steps:
- command: investigate-issue
- command: implement-issue
clearContext: true
Example loop-based workflow (autonomous iteration):
name: ralph-loop
description: Execute plan until all validations pass
model: sonnet # Optional: provider inherited from .archon/config.yaml
loop:
until: "All validations pass"
max_iterations: 10
fresh_context: true
prompt: |
Continue implementing the plan. Run validation after each change.
Signal completion with: "All validations pass"
How workflows are invoked:
- AI routes to workflows automatically based on user intent
- Workflows use commands defined in
.archon/commands/ - Only one workflow can run per conversation at a time
Managing workflows:
/workflow list # Show available workflows
/workflow reload # Reload definitions after editing
/workflow cancel # Cancel a running workflow
Architecture
System Overview
┌─────────────────────────────────────────────────────────┐
│ Platform Adapters (Telegram, Slack, Discord, GitHub) │
└──────────────────────────┬──────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Orchestrator │
│ (Message Routing & Context Management) │
└─────────────┬───────────────────────────┬───────────────┘
│ │
┌───────┴────────┐ ┌───────┴────────┐
│ │ │ │
▼ ▼ ▼ ▼
┌───────────┐ ┌────────────┐ ┌──────────────────────────┐
│ Command │ │ Workflow │ │ AI Assistant Clients │
│ Handler │ │ Executor │ │ (Claude / Codex) │
│ (Slash) │ │ (YAML) │ │ │
└───────────┘ └────────────┘ └──────────────────────────┘
│ │ │
└──────────────┴──────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ PostgreSQL (6 Tables) │
│ Codebases • Conversations • Sessions • Workflow Runs │
│ Command Templates • Isolation Environments │
└─────────────────────────────────────────────────────────┘
Key Design Patterns
- Adapter Pattern: Platform-agnostic via
IPlatformAdapterinterface - Strategy Pattern: Swappable AI assistants via
IAssistantClientinterface - Session Persistence: AI context survives restarts via database storage
- Generic Commands: User-defined markdown commands versioned with Git
- Workflow Engine: YAML-based multi-step automation with step and loop modes
- Worktree Isolation: Git worktrees enable parallel work per conversation, auto-synced with origin before creation
- Concurrency Control: Lock manager prevents race conditions
Database Schema
6 tables with `remote_agent_` prefix
- remote_agent_codebases - Repository metadata
  - Commands stored as JSONB: {command_name: {path, description}}
  - AI assistant type per codebase
  - Default working directory
- remote_agent_conversations - Platform conversation tracking
  - Platform type + conversation ID (unique constraint)
  - Linked to codebase via foreign key
  - AI assistant type locked at creation
- remote_agent_sessions - AI session management
  - Active session flag (one per conversation)
  - Session ID for resume capability
  - Metadata JSONB for command context
- remote_agent_command_templates - Global command templates
  - Shared command definitions (like /plan, /commit)
  - Available across all codebases
- remote_agent_isolation_environments - Worktree isolation
  - Tracks git worktrees per issue/PR
  - Enables worktree sharing between linked issues and PRs
- remote_agent_workflow_runs - Workflow execution tracking
  - Tracks active workflows per conversation
  - Prevents concurrent workflow execution
  - Stores workflow state and step progress
Troubleshooting
Bot Not Responding
Check if application is running:
docker compose ps
# Should show 'app' or 'app-with-db' with state 'Up'
Check application logs:
docker compose logs -f app # If using --profile external-db
docker compose logs -f app-with-db # If using --profile with-db
Verify bot token:
# In your .env file
cat .env | grep TELEGRAM_BOT_TOKEN
Test with health check:
curl http://localhost:3000/health
# Expected: {"status":"ok"}
Database Connection Errors
Check database health:
curl http://localhost:3000/health/db
# Expected: {"status":"ok","database":"connected"}
For local PostgreSQL (with-db profile):
# Check if postgres container is running
docker compose ps postgres
# Check postgres logs
docker compose logs -f postgres
# Test direct connection
docker compose exec postgres psql -U postgres -c "SELECT 1"
For remote PostgreSQL:
# Verify DATABASE_URL
echo $DATABASE_URL
# Test connection directly
psql $DATABASE_URL -c "SELECT 1"
Verify tables exist:
# For local postgres
docker compose exec postgres psql -U postgres -d remote_coding_agent -c "\dt"
# Should show: remote_agent_codebases, remote_agent_conversations, remote_agent_sessions,
# remote_agent_command_templates, remote_agent_isolation_environments
Clone Command Fails
Verify GitHub token:
cat .env | grep GH_TOKEN
# Should have both GH_TOKEN and GITHUB_TOKEN set
Test token validity:
# Test GitHub API access
curl -H "Authorization: token $GH_TOKEN" https://api.github.com/user
Check workspace permissions:
# Use the service name matching your profile
docker compose exec app ls -la /.archon/workspaces # --profile external-db
docker compose exec app-with-db ls -la /.archon/workspaces # --profile with-db
Try manual clone:
docker compose exec app git clone https://github.com/user/repo /.archon/workspaces/test-repo
# Or app-with-db if using --profile with-db
GitHub Webhook Not Triggering
Verify webhook delivery:
- Go to your webhook settings in GitHub
- Click on the webhook
- Check "Recent Deliveries" tab
- Look for successful deliveries (green checkmark)
Check webhook secret:
cat .env | grep WEBHOOK_SECRET
# Must match exactly what you entered in GitHub
Verify ngrok is running (local dev):
# Check ngrok status
curl http://localhost:4040/api/tunnels
# Or visit http://localhost:4040 in browser
Check application logs for webhook processing:
docker compose logs -f app | grep GitHub # --profile external-db
docker compose logs -f app-with-db | grep GitHub # --profile with-db
TypeScript Compilation Errors
Clean and rebuild:
# Stop containers (use the profile you started with)
docker compose --profile external-db down # or --profile with-db
# Clean build
rm -rf dist node_modules
bun install
bun run build
# Restart (use the profile you need)
docker compose --profile external-db up -d --build # or --profile with-db
Check for type errors:
bun run type-check
Container Won't Start
Check logs for specific errors:
docker compose logs app # If using --profile external-db
docker compose logs app-with-db # If using --profile with-db
Verify environment variables:
# Check if .env is properly formatted (include your profile)
docker compose --profile external-db config # or --profile with-db
Rebuild without cache:
docker compose --profile external-db build --no-cache # or --profile with-db
docker compose --profile external-db up -d # or --profile with-db
Check port conflicts:
# See if port 3000 is already in use
# Linux/Mac:
lsof -i :3000
# Windows:
netstat -ano | findstr :3000