The first open-source harness builder for AI coding. Make AI coding deterministic and repeatable.
Dynamous Remote Coding Agent

Control AI coding assistants (Claude Code, Codex) remotely from Telegram, GitHub, and more. Built for developers who want to code from anywhere with persistent sessions and flexible workflows.

Quick Start: Core Configuration · AI Assistant Setup · Platform Setup · Start the App · Usage Guide

Features

  • Multi-Platform Support: Interact via Telegram, Slack, Discord, GitHub issues/PRs, and more
  • Multiple AI Assistants: Choose between Claude Code or Codex (or both)
  • Persistent Sessions: Sessions survive container restarts with full context preservation
  • Codebase Management: Clone and work with any GitHub repository
  • Flexible Streaming: Real-time or batch message delivery per platform
  • Generic Command System: User-defined commands versioned with Git
  • Docker Ready: Simple deployment with Docker Compose

Prerequisites

System Requirements:

  • Docker & Docker Compose (for deployment)
  • Node.js 20+ (for local development only)

Accounts Required:

  • GitHub account (for repository cloning via /clone command)
  • At least one of: Claude Pro/Max subscription OR Codex account
  • At least one of: Telegram, Slack, Discord, or GitHub account (for interaction)

🌐 Production Deployment: This guide covers local development setup. To deploy remotely for 24/7 operation on a cloud VPS (DigitalOcean, AWS, Linode, etc.), see the Cloud Deployment Guide.


Setup Guide

Get started:

git clone https://github.com/dynamous-community/remote-coding-agent
cd remote-coding-agent

1. Core Configuration (Required)

Create environment file:

cp .env.example .env

Set these required variables:

Variable Purpose How to Get
DATABASE_URL PostgreSQL connection See database options below
GH_TOKEN Repository cloning Generate token with repo scope
GITHUB_TOKEN Same as GH_TOKEN Use same token value
PORT HTTP server port Default: 3000 (optional)
WORKSPACE_PATH Clone destination Recommended: /tmp/remote-agent-workspace or ~/remote-agent-workspace (see note below)

GitHub Personal Access Token Setup:

  1. Visit GitHub Settings > Personal Access Tokens
  2. Click "Generate new token (classic)" → Select scope: repo
  3. Copy token (starts with ghp_...) and set both variables:
# .env
GH_TOKEN=ghp_your_token_here
GITHUB_TOKEN=ghp_your_token_here  # Same value

⚠️ Important: WORKSPACE_PATH Configuration

The WORKSPACE_PATH determines where cloned repositories are stored. Use a path outside your project directory to avoid issues:

# Recommended options
WORKSPACE_PATH=~/remote-agent-workspace   # persistent in home directory (Linux/Mac)
# or
WORKSPACE_PATH=C:\Users\[your-user-ID]\remote-agent-workspace   # Windows

Docker note: Inside containers, the path is always /workspace (mapped from your host WORKSPACE_PATH in docker-compose.yml).

Database Setup - Choose One:

Option A: Remote PostgreSQL (Supabase, Neon)

Set your remote connection string:

DATABASE_URL=postgresql://user:password@host:5432/dbname

For fresh installations, run the combined migration:

psql $DATABASE_URL < migrations/000_combined.sql

This creates 4 tables:

  • remote_agent_codebases - Repository metadata
  • remote_agent_conversations - Platform conversation tracking
  • remote_agent_sessions - AI session management
  • remote_agent_command_templates - Global command templates

For updates to existing installations, run only the migrations you haven't applied yet:

# Check which migrations you've already run, then apply new ones:
psql $DATABASE_URL < migrations/002_command_templates.sql
psql $DATABASE_URL < migrations/003_add_worktree.sql
psql $DATABASE_URL < migrations/004_worktree_sharing.sql

Option B: Local PostgreSQL (via Docker)

Use the with-db profile for automatic PostgreSQL setup:

DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent

For fresh installations, database schema is created automatically when you start with docker compose --profile with-db. The combined migration runs on first startup.

For updates to existing Docker installations, you need to manually run new migrations:

# Connect to the running postgres container
docker compose exec postgres psql -U postgres -d remote_coding_agent

# Then run the migrations you haven't applied yet
\i /migrations/002_command_templates.sql
\i /migrations/003_add_worktree.sql
\i /migrations/004_worktree_sharing.sql
\q

Or from your host machine (requires psql installed):

psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/002_command_templates.sql
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/003_add_worktree.sql
psql postgresql://postgres:postgres@localhost:5432/remote_coding_agent < migrations/004_worktree_sharing.sql

2. AI Assistant Setup (Choose At Least One)

You must configure at least one AI assistant. Both can be configured if desired.

🤖 Claude Code

Recommended for Claude Pro/Max subscribers.

Get OAuth Token (Preferred Method):

# Install Claude Code CLI first: https://docs.claude.com/claude-code/installation
claude setup-token

# Copy the token starting with sk-ant-oat01-...

Set environment variable:

CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-xxxxx

Alternative: API Key (if you prefer pay-per-use credits):

  1. Visit console.anthropic.com/settings/keys
  2. Create a new key (starts with sk-ant-)
  3. Set environment variable:
CLAUDE_API_KEY=sk-ant-xxxxx

Set as default assistant (optional):

If you want Claude to be the default AI assistant for new conversations without codebase context, set this environment variable:

DEFAULT_AI_ASSISTANT=claude

🤖 Codex

Authenticate with Codex CLI:

# Install Codex CLI first: https://docs.codex.com/installation
codex login

# Follow browser authentication flow

Extract credentials from auth file:

On Linux/Mac:

cat ~/.codex/auth.json

On Windows:

type %USERPROFILE%\.codex\auth.json

Set all four environment variables:

CODEX_ID_TOKEN=eyJhbGc...
CODEX_ACCESS_TOKEN=eyJhbGc...
CODEX_REFRESH_TOKEN=rt_...
CODEX_ACCOUNT_ID=6a6a7ba6-...

Set as default assistant (optional):

If you want Codex to be the default AI assistant for new conversations without codebase context, set this environment variable:

DEFAULT_AI_ASSISTANT=codex

How Assistant Selection Works:

  • Assistant type is set per codebase (auto-detected from .claude/commands/ or .codex/ folders)
  • Once a conversation starts, the assistant type is locked for that conversation
  • DEFAULT_AI_ASSISTANT (optional) is used only for new conversations without codebase context
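
The selection rules above can be sketched roughly as follows. This is an illustrative model only: the folder checks, helper names, and the final fallback to "claude" are assumptions, not the actual implementation.

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

type Assistant = "claude" | "codex";

// Auto-detect the assistant from codebase folders, as described above.
function detectAssistant(codebasePath: string): Assistant | null {
  if (existsSync(join(codebasePath, ".claude", "commands"))) return "claude";
  if (existsSync(join(codebasePath, ".codex"))) return "codex";
  return null; // no assistant-specific folder found
}

function resolveAssistant(opts: {
  codebasePath?: string;        // set when the conversation has codebase context
  lockedAssistant?: Assistant;  // set once the conversation has started
  defaultAssistant?: Assistant; // from DEFAULT_AI_ASSISTANT
}): Assistant {
  if (opts.lockedAssistant) return opts.lockedAssistant; // locked per conversation
  const detected = opts.codebasePath ? detectAssistant(opts.codebasePath) : null;
  return detected ?? opts.defaultAssistant ?? "claude";  // fallback is assumed
}
```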

3. Platform Adapter Setup (Choose At Least One)

You must configure at least one platform to interact with your AI assistant.

💬 Telegram

Create Telegram Bot:

  1. Message @BotFather on Telegram
  2. Send /newbot and follow the prompts
  3. Copy the bot token (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz)

Set environment variable:

TELEGRAM_BOT_TOKEN=123456789:ABCdefGHI...

Configure streaming mode (optional):

TELEGRAM_STREAMING_MODE=stream  # stream (default) | batch

For streaming mode details, see Advanced Configuration.

💼 Slack

Create Slack App with Socket Mode:

See the detailed Slack Setup Guide for step-by-step instructions.

Quick Overview:

  1. Create app at api.slack.com/apps
  2. Enable Socket Mode and get App Token (xapp-...)
  3. Add Bot Token Scopes: app_mentions:read, chat:write, channels:history, im:history, im:write
  4. Subscribe to events: app_mention, message.im
  5. Install to workspace and get Bot Token (xoxb-...)

Set environment variables:

SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token

Optional configuration:

# Restrict to specific users (comma-separated Slack user IDs)
SLACK_ALLOWED_USER_IDS=U1234ABCD,W5678EFGH

# Streaming mode
SLACK_STREAMING_MODE=batch  # batch (default) | stream

Usage:

Interact by @mentioning your bot in channels or DM directly:

@your-bot /clone https://github.com/user/repo
@your-bot /status

Thread replies maintain conversation context, enabling workflows like:

  1. Clone repo in main channel
  2. Continue work in thread
  3. Use /worktree for parallel development

🐙 GitHub Webhooks

Requirements:

  • GitHub repository with issues enabled
  • GITHUB_TOKEN already set in Core Configuration above
  • Public endpoint for webhooks (see ngrok setup below for local development)

Step 1: Generate Webhook Secret

On Linux/Mac:

openssl rand -hex 32

On Windows (PowerShell):

-join ((1..32) | ForEach-Object { '{0:x2}' -f (Get-Random -Maximum 256) })

Save this secret - you'll need it for steps 3 and 4.

Step 2: Expose Local Server (Development Only)

Using ngrok (Free Tier)
# Install ngrok: https://ngrok.com/download
# Or: choco install ngrok (Windows)
# Or: brew install ngrok (Mac)

# Start tunnel
ngrok http 3000

# Copy the HTTPS URL (e.g., https://abc123.ngrok-free.app)
# ⚠️ Free tier URLs change on restart

Keep this terminal open while testing.

Using Cloudflare Tunnel (Persistent URLs)
# Install: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/
cloudflared tunnel --url http://localhost:3000

# Get persistent URL from Cloudflare dashboard

Persistent URLs survive restarts.

For production deployments, use your deployed server URL (no tunnel needed).

Step 3: Configure GitHub Webhook

Go to your repository settings:

  • Navigate to: https://github.com/owner/repo/settings/hooks
  • Click "Add webhook"
  • Note: For multiple repositories, you'll need to add the webhook to each one individually

Webhook Configuration:

Field Value
Payload URL Local: https://abc123.ngrok-free.app/webhooks/github
Production: https://your-domain.com/webhooks/github
Content type application/json
Secret Paste the secret from Step 1
SSL verification Enable SSL verification (recommended)
Events Select "Let me select individual events":
✓ Issues
✓ Issue comments
✓ Pull requests

Click "Add webhook" and verify it shows a green checkmark after delivery.

Step 4: Set Environment Variables

WEBHOOK_SECRET=your_secret_from_step_1

Important: The WEBHOOK_SECRET must match exactly what you entered in GitHub's webhook configuration.
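
For reference, GitHub computes an HMAC-SHA256 over the raw request body using this secret and sends it in the X-Hub-Signature-256 header; a mismatched secret is exactly why deliveries fail verification. A minimal sketch of that check (function name is illustrative; the server's actual handler may differ):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify GitHub's `X-Hub-Signature-256: sha256=<hex>` header against the
// raw request body. If WEBHOOK_SECRET differs from what GitHub has, the
// computed digest never matches and the delivery is rejected.
function verifyGitHubSignature(secret: string, rawBody: string, signatureHeader: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```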

Step 5: Configure Streaming (Optional)

GITHUB_STREAMING_MODE=batch  # batch (default) | stream

For streaming mode details, see Advanced Configuration.

Usage:

Interact by @mentioning @remote-agent in issues or PRs:

@remote-agent can you analyze this bug?
@remote-agent /command-invoke prime
@remote-agent review this implementation

First mention behavior:

  • Automatically clones the repository to /workspace
  • Detects and loads commands from .claude/commands/ or .agents/commands/
  • Injects full issue/PR context for the AI assistant

Subsequent mentions:

  • Resumes existing conversation
  • Maintains full context across comments

💬 Discord

Create Discord Bot:

  1. Visit Discord Developer Portal
  2. Click "New Application" → Enter a name → Click "Create"
  3. Go to the "Bot" tab in the left sidebar
  4. Click "Add Bot" → Confirm

Get Bot Token:

  1. Under the Bot tab, click "Reset Token"
  2. Copy the token (starts with a long alphanumeric string)
  3. Save it securely - you won't be able to see it again

Enable Message Content Intent (Required):

  1. Scroll down to "Privileged Gateway Intents"
  2. Enable "Message Content Intent" (required for the bot to read messages)
  3. Save changes

Invite Bot to Your Server:

  1. Go to "OAuth2" → "URL Generator" in the left sidebar
  2. Under "Scopes", select:
    • bot
  3. Under "Bot Permissions", select:
    • ✓ Send Messages
    • ✓ Read Message History
    • ✓ Create Public Threads (optional, for thread support)
    • ✓ Send Messages in Threads (optional, for thread support)
  4. Copy the generated URL at the bottom
  5. Paste it in your browser and select your server
  6. Click "Authorize"

Note: You need "Manage Server" permission to add bots.

Set environment variable:

DISCORD_BOT_TOKEN=your_bot_token_here

Configure user whitelist (optional):

To restrict bot access to specific users, enable Developer Mode in Discord:

  1. User Settings → Advanced → Enable "Developer Mode"
  2. Right-click on users → "Copy User ID"
  3. Add to environment:
DISCORD_ALLOWED_USER_IDS=123456789012345678,987654321098765432

Configure streaming mode (optional):

DISCORD_STREAMING_MODE=batch  # batch (default) | stream

For streaming mode details, see Advanced Configuration.

Usage:

The bot responds to:

  • Direct Messages: Just send messages directly
  • Server Channels: @mention the bot (e.g., @YourBotName help me with this code)
  • Threads: Bot maintains context in thread conversations

4. Start the Application

Choose the Docker Compose profile based on your database setup:

Option A: With Remote PostgreSQL (Supabase, Neon, etc.)

Starts only the app container (requires DATABASE_URL set to remote database in .env):

# Start app container
docker compose --profile external-db up -d --build

# View logs
docker compose logs -f app

Option B: With Local PostgreSQL (Docker)

Starts both the app and PostgreSQL containers:

# Start containers
docker compose --profile with-db up -d --build

# Wait for startup (watch logs)
docker compose logs -f app-with-db

# Database tables are created automatically via init script

Option C: Local Development (No Docker)

Run directly with Bun (requires local PostgreSQL or remote DATABASE_URL in .env):

bun run dev

Stop the application:

docker compose --profile external-db down  # If using Option A
docker compose --profile with-db down      # If using Option B

Usage

Available Commands

Once your platform adapter is running, you can use these commands:

Command Description Example
/help Show available commands /help
/clone <url> Clone a GitHub repository /clone https://github.com/user/repo
/repos List cloned repositories /repos
/status Show conversation state /status
/getcwd Show current working directory /getcwd
/setcwd <path> Change working directory /setcwd /workspace/repo
/command-set <name> <path> Register a custom command /command-set analyze .claude/commands/analyze.md
/load-commands <folder> Bulk load commands from folder /load-commands .claude/commands
/command-invoke <name> [args] Execute custom command /command-invoke plan "Add dark mode"
/commands List registered commands /commands
/reset Clear active session /reset

Example Workflow (Telegram)

🚀 Initial Setup

You: /clone https://github.com/anthropics/anthropic-sdk-typescript

Bot: ✅ Repository cloned successfully!

     📁 Codebase: anthropic-sdk-typescript
     📂 Path: /workspace/anthropic-sdk-typescript

     🔍 Detected .claude/commands/ folder

You: /load-commands .claude/commands

Bot: ✅ Loaded 5 commands:
     • prime - Research codebase
     • plan - Create implementation plan
     • execute - Implement feature
     • validate - Run validation
     • commit - Create git commit

💬 Asking Questions

You: What files are in this repo?

Bot: 📋 Let me analyze the repository structure for you...

     [Claude streams detailed analysis]

🔧 Working with Commands

You: /command-invoke prime

Bot: 🔍 Starting codebase research...

     [Claude analyzes codebase structure, dependencies, patterns]

You: /command-invoke plan "Add retry logic to API calls"

Bot: 📝 Creating implementation plan...

     [Claude creates detailed plan with steps]

Checking Status

You: /status

Bot: 📊 Conversation Status

     🤖 Platform: telegram
     🧠 AI Assistant: claude

     📦 Codebase: anthropic-sdk-typescript
     🔗 Repository: https://github.com/anthropics/anthropic-sdk-typescript
     📂 Working Directory: /workspace/anthropic-sdk-typescript

     🔄 Active Session: a1b2c3d4...

     📋 Registered Commands:
       • prime - Research codebase
       • plan - Create implementation plan
       • execute - Implement feature
       • validate - Run validation
       • commit - Create git commit

🔄 Reset Session

You: /reset

Bot: ✅ Session cleared. Starting fresh on next message.
     📦 Codebase configuration preserved.

Example Workflow (GitHub)

Create an issue or comment on an existing issue/PR:

@your-bot-name can you help me understand the authentication flow?

Bot responds with analysis. Continue the conversation:

@your-bot-name can you create a sequence diagram for this?

Bot maintains context and provides the diagram.


Advanced Configuration

Streaming Modes Explained

Stream Mode

Messages are sent in real-time as the AI generates responses.

Configuration:

TELEGRAM_STREAMING_MODE=stream
GITHUB_STREAMING_MODE=stream

Pros:

  • Real-time feedback and progress indication
  • More interactive and engaging
  • See AI reasoning as it works

Cons:

  • More API calls to platform
  • May hit rate limits with very long responses
  • Creates many messages/comments

Best for: Interactive chat platforms (Telegram)

Batch Mode

Only the final summary message is sent after AI completes processing.

Configuration:

TELEGRAM_STREAMING_MODE=batch
GITHUB_STREAMING_MODE=batch

Pros:

  • Single coherent message/comment
  • Fewer API calls
  • No spam or clutter

Cons:

  • No progress indication during processing
  • Longer wait for first response
  • Can't see intermediate steps

Best for: Issue trackers and async platforms (GitHub)
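
The two modes can be sketched as a single dispatch over the assistant's output chunks. Names here are hypothetical; the real adapters may differ.

```typescript
type StreamingMode = "stream" | "batch";

// "stream" sends each chunk as its own platform message as it arrives;
// "batch" accumulates everything and sends one final message.
async function deliver(
  mode: StreamingMode,
  chunks: AsyncIterable<string>,
  send: (text: string) => Promise<void>,
): Promise<void> {
  if (mode === "stream") {
    for await (const chunk of chunks) {
      await send(chunk); // one platform message per chunk
    }
  } else {
    const parts: string[] = [];
    for await (const chunk of chunks) parts.push(chunk);
    await send(parts.join("")); // single coherent message at the end
  }
}
```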

Concurrency Settings

Control how many conversations the system processes simultaneously:

MAX_CONCURRENT_CONVERSATIONS=10  # Default: 10

How it works:

  • Conversations are processed with a lock manager
  • If max concurrent limit reached, new messages are queued
  • Prevents resource exhaustion and API rate limits
  • Each conversation maintains its own independent context
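
The queueing behavior described above can be sketched as a small limiter. This is an illustrative model only, not the project's actual lock manager.

```typescript
// At most `maxConcurrent` tasks run at once; later arrivals wait FIFO.
class ConversationLimiter {
  private active = 0;
  private queue: Array<() => void> = [];

  constructor(private readonly maxConcurrent: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active < this.maxConcurrent) {
      this.active++; // free slot: start immediately
    } else {
      // Wait; a finishing task hands its slot over directly, so `active`
      // already accounts for us when we resume.
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    try {
      return await task();
    } finally {
      const next = this.queue.shift();
      if (next) next();   // transfer the slot to the next waiter
      else this.active--; // no waiters: release the slot
    }
  }

  get status() {
    return { active: this.active, queued: this.queue.length, maxConcurrent: this.maxConcurrent };
  }
}
```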

Check current load:

curl http://localhost:3000/health/concurrency

Response:

{
  "status": "ok",
  "active": 3,
  "queued": 0,
  "maxConcurrent": 10
}

Tuning guidance:

  • Low resources: Set to 3-5
  • Standard: Default 10 works well
  • High resources: Can increase to 20-30 (monitor API limits)

Health Check Endpoints

The application exposes health check endpoints for monitoring:

Basic Health Check:

curl http://localhost:3000/health

Returns: {"status":"ok"}

Database Connectivity:

curl http://localhost:3000/health/db

Returns: {"status":"ok","database":"connected"}

Concurrency Status:

curl http://localhost:3000/health/concurrency

Returns: {"status":"ok","active":0,"queued":0,"maxConcurrent":10}

Use cases:

  • Docker healthcheck configuration
  • Load balancer health checks
  • Monitoring and alerting systems (Prometheus, Datadog, etc.)
  • CI/CD deployment verification

Custom Command System

Create your own commands by adding markdown files to your codebase:

1. Create command file:

mkdir -p .claude/commands
cat > .claude/commands/analyze.md << 'EOF'
You are an expert code analyzer.

Analyze the following aspect of the codebase: $1

Provide:
1. Current implementation analysis
2. Potential issues or improvements
3. Best practices recommendations

Focus area: $ARGUMENTS
EOF

2. Load commands:

/load-commands .claude/commands

3. Invoke your command:

/command-invoke analyze "security vulnerabilities"

Variable substitution:

  • $1, $2, $3, etc. - Positional arguments
  • $ARGUMENTS - All arguments as a single string
  • $PLAN - Previous plan from session metadata
  • $IMPLEMENTATION_SUMMARY - Previous execution summary

Commands are version-controlled with your codebase, not stored in the database.
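
A sketch of the substitution rules above; the helper name and exact replacement order are assumptions for illustration:

```typescript
// Expand $1..$n, $ARGUMENTS, and named session variables (e.g. $PLAN)
// in a command template. Simple literal replacement; no word-boundary
// handling, which a real implementation would likely add.
function substituteVariables(
  template: string,
  args: string[],
  session: Record<string, string> = {},
): string {
  let out = template;
  for (const [name, value] of Object.entries(session)) {
    out = out.split("$" + name).join(value); // e.g. $PLAN, $IMPLEMENTATION_SUMMARY
  }
  out = out.split("$ARGUMENTS").join(args.join(" "));
  // Replace higher indices first so $10 is not consumed as $1 + "0".
  for (let i = args.length; i >= 1; i--) {
    out = out.split("$" + i).join(args[i - 1]);
  }
  return out;
}
```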


Architecture

System Overview

┌─────────────────────────────────────────────────────────┐
│   Platform Adapters (Telegram, Slack, Discord, GitHub) │
└──────────────────────────┬──────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────┐
│            Orchestrator                     │
│   (Message Routing & Context Management)    │
└──────────────┬──────────────────────────────┘
               │
       ┌───────┴────────┐
       │                │
       ▼                ▼
┌─────────────┐  ┌──────────────────┐
│  Command    │  │  AI Assistant    │
│  Handler    │  │  Clients         │
│  (Slash)    │  │  (Claude/Codex)  │
└─────────────┘  └────────┬─────────┘
       │                  │
       └────────┬─────────┘
                ▼
┌─────────────────────────────────────────────┐
│        PostgreSQL (4 Tables)                │
│  • Codebases  • Conversations  • Sessions   │
│  • Command Templates                        │
└─────────────────────────────────────────────┘

Key Design Patterns

  • Adapter Pattern: Platform-agnostic via IPlatformAdapter interface
  • Strategy Pattern: Swappable AI assistants via IAssistantClient interface
  • Session Persistence: AI context survives restarts via database storage
  • Generic Commands: User-defined markdown commands versioned with Git
  • Concurrency Control: Lock manager prevents race conditions
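
Hypothetical shapes for the two interfaces named above; the real definitions live under src and will differ in detail. This only illustrates the pattern:

```typescript
interface IPlatformAdapter {
  readonly platform: string; // e.g. "telegram", "github"
  start(): Promise<void>;    // begin listening for platform events
  sendMessage(conversationId: string, text: string): Promise<void>;
}

interface IAssistantClient {
  readonly assistant: "claude" | "codex";
  // Resume `sessionId` when given, otherwise start a new session;
  // the returned id is persisted so the next message can resume.
  sendPrompt(prompt: string, sessionId?: string): Promise<{ text: string; sessionId: string }>;
}

// The orchestrator can then route any platform to any assistant:
async function handleMessage(
  adapter: IPlatformAdapter,
  client: IAssistantClient,
  conversationId: string,
  text: string,
  sessionId?: string,
): Promise<string> {
  const reply = await client.sendPrompt(text, sessionId);
  await adapter.sendMessage(conversationId, reply.text);
  return reply.sessionId; // caller stores this for the next turn
}
```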

Database Schema

4 tables with `remote_agent_` prefix
  1. remote_agent_codebases - Repository metadata

    • Commands stored as JSONB: {command_name: {path, description}}
    • AI assistant type per codebase
    • Default working directory
  2. remote_agent_conversations - Platform conversation tracking

    • Platform type + conversation ID (unique constraint)
    • Linked to codebase via foreign key
    • AI assistant type locked at creation
  3. remote_agent_sessions - AI session management

    • Active session flag (one per conversation)
    • Session ID for resume capability
    • Metadata JSONB for command context
  4. remote_agent_command_templates - Global command templates

    • Reusable command definitions available across codebases
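
As a rough illustration, rows in these tables might map to TypeScript shapes like the following. Column names are inferred from the descriptions above, not taken from the actual migrations:

```typescript
interface CodebaseRow {
  id: number;
  repo_url: string;
  ai_assistant: "claude" | "codex";
  commands: Record<string, { path: string; description: string }>; // JSONB
  working_directory: string;
}

interface ConversationRow {
  id: number;
  platform: string;        // platform + conversation_id: unique constraint
  conversation_id: string;
  codebase_id: number | null;       // FK to remote_agent_codebases
  ai_assistant: "claude" | "codex"; // locked at creation
}

interface SessionRow {
  id: number;
  conversation_id: number;
  session_id: string; // used to resume the AI session
  is_active: boolean; // one active session per conversation
  metadata: Record<string, unknown>; // JSONB command context
}
```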

Troubleshooting

Bot Not Responding

Check if application is running:

docker compose ps
# Should show 'app' or 'app-with-db' with state 'Up'

Check application logs:

docker compose logs -f app          # If using --profile external-db
docker compose logs -f app-with-db  # If using --profile with-db

Verify bot token:

# In your .env file
cat .env | grep TELEGRAM_BOT_TOKEN

Test with health check:

curl http://localhost:3000/health
# Expected: {"status":"ok"}

Database Connection Errors

Check database health:

curl http://localhost:3000/health/db
# Expected: {"status":"ok","database":"connected"}

For local PostgreSQL (with-db profile):

# Check if postgres container is running
docker compose ps postgres

# Check postgres logs
docker compose logs -f postgres

# Test direct connection
docker compose exec postgres psql -U postgres -c "SELECT 1"

For remote PostgreSQL:

# Verify DATABASE_URL
echo $DATABASE_URL

# Test connection directly
psql $DATABASE_URL -c "SELECT 1"

Verify tables exist:

# For local postgres
docker compose exec postgres psql -U postgres -d remote_coding_agent -c "\dt"

# Should show: remote_agent_codebases, remote_agent_conversations, remote_agent_sessions, remote_agent_command_templates

Clone Command Fails

Verify GitHub token:

cat .env | grep GH_TOKEN
# Should have both GH_TOKEN and GITHUB_TOKEN set

Test token validity:

# Test GitHub API access
curl -H "Authorization: token $GH_TOKEN" https://api.github.com/user

Check workspace permissions:

# Use the service name matching your profile
docker compose exec app ls -la /workspace          # --profile external-db
docker compose exec app-with-db ls -la /workspace  # --profile with-db

Try manual clone:

docker compose exec app git clone https://github.com/user/repo /workspace/test-repo
# Or app-with-db if using --profile with-db

GitHub Webhook Not Triggering

Verify webhook delivery:

  1. Go to your webhook settings in GitHub
  2. Click on the webhook
  3. Check "Recent Deliveries" tab
  4. Look for successful deliveries (green checkmark)

Check webhook secret:

cat .env | grep WEBHOOK_SECRET
# Must match exactly what you entered in GitHub

Verify ngrok is running (local dev):

# Check ngrok status
curl http://localhost:4040/api/tunnels
# Or visit http://localhost:4040 in browser

Check application logs for webhook processing:

docker compose logs -f app | grep GitHub          # --profile external-db
docker compose logs -f app-with-db | grep GitHub  # --profile with-db

TypeScript Compilation Errors

Clean and rebuild:

# Stop containers (use the profile you started with)
docker compose --profile external-db down  # or --profile with-db

# Clean build
rm -rf dist node_modules
bun install
bun run build

# Restart (use the profile you need)
docker compose --profile external-db up -d --build  # or --profile with-db

Check for type errors:

bun run type-check

Container Won't Start

Check logs for specific errors:

docker compose logs app          # If using --profile external-db
docker compose logs app-with-db  # If using --profile with-db

Verify environment variables:

# Check if .env is properly formatted (include your profile)
docker compose --profile external-db config  # or --profile with-db

Rebuild without cache:

docker compose --profile external-db build --no-cache  # or --profile with-db
docker compose --profile external-db up -d             # or --profile with-db

Check port conflicts:

# See if port 3000 is already in use
# Linux/Mac:
lsof -i :3000

# Windows:
netstat -ano | findstr :3000