feat: add REPL mode, test suite, CI/CD, and Claude Skills (Sprints 6-8)

Sprint 6 - REPL Mode + Polish:
- Error hierarchy (PbiCliError, McpToolError, etc.) for clean REPL error handling
- Interactive REPL with prompt-toolkit (persistent MCP connection, command completion, history)
- REPL-aware run_tool() and connection commands that reuse shared client
- README.md and CHANGELOG.md

Sprint 7 - Tests + CI/CD:
- 120 tests across unit, command, and e2e test files (79% coverage)
- MockPbiMcpClient with canned responses for test isolation
- GitHub Actions CI (lint + typecheck + test matrix: 3 OS x 3 Python)
- GitHub Actions release workflow for PyPI trusted publishing

Sprint 8 - Claude Skills + Installer:
- 5 bundled SKILL.md files (modeling, dax, deployment, security, docs)
- `pbi skills install/list/uninstall` command for Claude Code discovery
- Skills packaged with wheel via setuptools package-data
MinaSaad1 2026-03-26 13:54:24 +02:00
parent 170413cf22
commit 51a23668a7
44 changed files with 2895 additions and 46 deletions

49
.github/workflows/ci.yml vendored Normal file
@@ -0,0 +1,49 @@
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e ".[dev]"
      - run: ruff check src/ tests/
      - run: ruff format --check src/ tests/

  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e ".[dev]"
      - run: mypy src/

  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version: ["3.10", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[dev]"
      - run: pytest --cov=pbi_cli --cov-report=xml -m "not e2e" -v
      - name: Upload coverage
        if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.12'
        uses: codecov/codecov-action@v4
        with:
          file: coverage.xml
        continue-on-error: true

20
.github/workflows/release.yml vendored Normal file
@@ -0,0 +1,20 @@
name: Release

on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install build
      - run: python -m build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1

38
CHANGELOG.md Normal file
@@ -0,0 +1,38 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.2.0] - 2026-03-26
### Added
- Interactive REPL mode (`pbi repl`) with persistent MCP connection
- Tab completion and command history in REPL
- Dynamic prompt showing active connection name
- Error hierarchy (`PbiCliError`, `McpToolError`, `BinaryNotFoundError`, `ConnectionRequiredError`)
### Changed
- REPL mode reuses shared MCP client instead of spawning per command
- Connection commands (`connect`, `connect-fabric`, `disconnect`) are REPL-aware
## [0.1.0] - 2026-03-26
### Added
- Initial release with 22 command groups covering all Power BI MCP tool operations
- Binary manager: download Power BI MCP binary from VS Code Marketplace
- Connection management with named connections and persistence
- DAX query execution, validation, and cache clearing
- Full CRUD for measures, tables, columns, relationships
- Model metadata, statistics, and refresh operations
- Database import/export (TMDL and TMSL formats)
- Security role management (row-level security)
- Calculation groups, partitions, perspectives, hierarchies
- Named expressions, calendar tables, diagnostic traces
- Transaction management (begin/commit/rollback)
- Advanced operations: cultures, translations, functions, query groups
- Dual output mode: `--json` for agents, Rich tables for humans
- Named connection support with `--connection` / `-c` flag
- Binary resolution chain: env var, managed binary, VS Code extension fallback
- Cross-platform support: Windows, macOS, Linux (x64 and ARM64)

206
README.md Normal file
@@ -0,0 +1,206 @@
# pbi-cli
**Token-efficient CLI for Power BI semantic models.**
pbi-cli wraps Microsoft's Power BI MCP server so you can manage semantic models from the terminal. A raw MCP tool schema consumes roughly 4,000 tokens of an AI agent's context window; the equivalent `pbi` command uses about 30. One install, no separate MCP server configuration required.
```bash
pip install pbi-cli
pbi setup
pbi connect --data-source localhost:54321
pbi measure list
```
## Why pbi-cli?
| Approach | Context cost | Setup |
|----------|-------------|-------|
| Raw MCP server | ~4,000 tokens per tool schema | Manual config per project |
| **pbi-cli** | **~30 tokens per command** | **`pip install pbi-cli`** |
Designed for Claude Code and other AI agents, but works great for humans too. Use `--json` for machine-readable output or enjoy Rich-formatted tables by default.
## Installation
```bash
pip install pbi-cli
```
### Prerequisites
- Python 3.10+
- Power BI Desktop (for local development) or a Fabric workspace
### First-time setup
Download the Power BI MCP binary:
```bash
pbi setup
```
This downloads the official Microsoft binary from the VS Code Marketplace to `~/.pbi-cli/bin/`. You can also point to an existing binary:
```bash
export PBI_MCP_BINARY=/path/to/powerbi-modeling-mcp
```
## Quick Start
### Connect to Power BI Desktop
```bash
# Connect to a local Power BI Desktop instance
pbi connect --data-source localhost:54321
# Connect to a Fabric workspace model
pbi connect-fabric --workspace "My Workspace" --model "Sales Model"
```
### Run DAX queries
```bash
pbi dax execute "EVALUATE TOPN(10, Sales)"
pbi dax execute --file query.dax
cat query.dax | pbi dax execute -
```
### Manage measures
```bash
pbi measure list
pbi measure create "Total Revenue" --expression "SUM(Sales[Revenue])" --table Sales
pbi measure get "Total Revenue" --table Sales
```
### Export and import models
```bash
pbi database export-tmdl ./my-model/
pbi database import-tmdl ./my-model/
```
## Command Reference
| Group | Description | Examples |
|-------|-------------|---------|
| `setup` | Download and manage the MCP binary | `pbi setup`, `pbi setup --check` |
| `connect` | Connect to Power BI via data source | `pbi connect -d localhost:54321` |
| `connect-fabric` | Connect to Fabric workspace | `pbi connect-fabric -w "WS" -m "Model"` |
| `disconnect` | Disconnect from active connection | `pbi disconnect` |
| `connections` | Manage saved connections | `pbi connections list` |
| `dax` | Execute and validate DAX queries | `pbi dax execute "EVALUATE Sales"` |
| `measure` | CRUD for measures | `pbi measure list`, `pbi measure create` |
| `table` | CRUD for tables | `pbi table list`, `pbi table get Sales` |
| `column` | CRUD for columns | `pbi column list --table Sales` |
| `relationship` | Manage relationships | `pbi relationship list` |
| `model` | Model metadata and refresh | `pbi model get`, `pbi model refresh` |
| `database` | Import/export TMDL and TMSL | `pbi database export-tmdl ./out/` |
| `security-role` | Row-level security roles | `pbi security-role list` |
| `calc-group` | Calculation groups and items | `pbi calc-group list` |
| `partition` | Table partitions | `pbi partition list --table Sales` |
| `perspective` | Model perspectives | `pbi perspective list` |
| `hierarchy` | User hierarchies | `pbi hierarchy list --table Date` |
| `expression` | Named expressions | `pbi expression list` |
| `calendar` | Calendar table management | `pbi calendar list` |
| `trace` | Diagnostic traces | `pbi trace start` |
| `transaction` | Explicit transactions | `pbi transaction begin` |
| `advanced` | Cultures, translations, functions | `pbi advanced culture list` |
| `repl` | Interactive REPL session | `pbi repl` |
Run `pbi <command> --help` for full option details.
## REPL Mode
The interactive REPL keeps the MCP server process alive across commands, avoiding the 2-3 second startup cost on each invocation:
```
$ pbi repl
pbi-cli interactive mode. Type 'exit' or Ctrl+D to quit.
pbi> connect --data-source localhost:54321
Connected: localhost-54321 (localhost:54321)
pbi(localhost-54321)> measure list
...
pbi(localhost-54321)> dax execute "EVALUATE Sales"
...
pbi(localhost-54321)> exit
Goodbye.
```
Features:
- Persistent MCP server connection (no restart between commands)
- Command history (stored at `~/.pbi-cli/repl_history`)
- Tab completion for commands and subcommands
- Dynamic prompt showing active connection name
## For AI Agents
Use `--json` before the subcommand for machine-readable JSON output:
```bash
pbi --json measure list
pbi --json dax execute "EVALUATE Sales"
pbi --json model get
```
JSON output goes to stdout. Status messages go to stderr. This makes piping and parsing straightforward.
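For example, an agent shelling out to `pbi` can feed stdout straight into a JSON parser while handling stderr separately. A minimal sketch, assuming a list-of-objects shape for `measure list` output (the exact schema may differ):

```python
import json

def parse_pbi_json(stdout_text: str) -> list[dict]:
    """Parse `pbi --json` stdout; status messages on stderr never pollute it."""
    return json.loads(stdout_text)

# Simulated stdout from `pbi --json measure list` (illustrative shape only).
sample = '[{"name": "Total Revenue", "table": "Sales"}]'
measures = parse_pbi_json(sample)
print(measures[0]["name"])  # Total Revenue
```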
### Named connections
Use `-c` to target a specific named connection:
```bash
pbi -c my-conn measure list
pbi -c prod-model dax execute "EVALUATE Sales"
```
## Configuration
pbi-cli stores its configuration in `~/.pbi-cli/`:
```
~/.pbi-cli/
  config.json        # Binary version, path, args
  connections.json   # Named connections
  repl_history       # REPL command history
  bin/
    {version}/
      powerbi-modeling-mcp[.exe]
```
### Binary resolution order
1. `PBI_MCP_BINARY` environment variable (explicit override)
2. `~/.pbi-cli/bin/{version}/` (managed by `pbi setup`)
3. VS Code extension fallback (`~/.vscode/extensions/analysis-services.powerbi-modeling-mcp-*/server/`)
## Development
```bash
git clone https://github.com/pbi-cli/pbi-cli.git
cd pbi-cli
pip install -e ".[dev]"
# Lint
ruff check src/ tests/
# Type check
mypy src/
# Test
pytest -m "not e2e"
```
## Contributing
Contributions are welcome! Please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/my-change`)
3. Make your changes with tests
4. Run `ruff check` and `mypy` before submitting
5. Open a pull request
## License
[MIT](LICENSE)

@@ -55,6 +55,9 @@ dev = [
[tool.setuptools.packages.find]
where = ["src"]
[tool.setuptools.package-data]
"pbi_cli.skills" = ["**/*.md"]
[tool.ruff]
target-version = "py310"
line-length = 100

@@ -4,8 +4,7 @@ from __future__ import annotations

from typing import Any

import click

from pbi_cli.core.errors import McpToolError
from pbi_cli.core.mcp_client import get_client
from pbi_cli.core.output import format_mcp_result, print_error
from pbi_cli.main import PbiContext
@@ -20,20 +19,23 @@ def run_tool(
    Adds connectionName from context if available. Formats output based
    on --json flag. Returns the result or exits on error.

    In REPL mode the shared client is reused and never stopped.
    """
    if ctx.connection:
        request.setdefault("connectionName", ctx.connection)

    client = get_client()
    client = get_client(repl_mode=ctx.repl_mode)
    try:
        result = client.call_tool(tool_name, request)
        format_mcp_result(result, ctx.json_output)
        return result
    except Exception as e:
        print_error(str(e))
        raise SystemExit(1)
        raise McpToolError(tool_name, str(e))
    finally:
        client.stop()
        if not ctx.repl_mode:
            client.stop()


def build_definition(

@@ -12,9 +12,8 @@ from pbi_cli.core.connection_store import (
    remove_connection,
    save_connections,
)
from pbi_cli.core.mcp_client import PbiMcpClient, get_client
from pbi_cli.core.mcp_client import get_client
from pbi_cli.core.output import (
    format_mcp_result,
    print_error,
    print_json,
    print_success,
@@ -43,7 +42,8 @@ def connect(ctx: PbiContext, data_source: str, catalog: str, name: str | None, c
    if connection_string:
        request["connectionString"] = connection_string

    client = get_client()
    repl = ctx.repl_mode
    client = get_client(repl_mode=repl)
    try:
        result = client.call_tool("connection_operations", request)
@@ -65,7 +65,8 @@ def connect(ctx: PbiContext, data_source: str, catalog: str, name: str | None, c
        print_error(f"Connection failed: {e}")
        raise SystemExit(1)
    finally:
        client.stop()
        if not repl:
            client.stop()


@click.command(name="connect-fabric")
@@ -86,7 +87,8 @@ def connect_fabric(ctx: PbiContext, workspace: str, model: str, name: str | None
        "tenantName": tenant,
    }

    client = get_client()
    repl = ctx.repl_mode
    client = get_client(repl_mode=repl)
    try:
        result = client.call_tool("connection_operations", request)
@@ -109,7 +111,8 @@ def connect_fabric(ctx: PbiContext, workspace: str, model: str, name: str | None
        print_error(f"Fabric connection failed: {e}")
        raise SystemExit(1)
    finally:
        client.stop()
        if not repl:
            client.stop()


@click.command()
@@ -124,7 +127,8 @@ def disconnect(ctx: PbiContext, name: str | None) -> None:
        print_error("No active connection to disconnect.")
        raise SystemExit(1)

    client = get_client()
    repl = ctx.repl_mode
    client = get_client(repl_mode=repl)
    try:
        result = client.call_tool("connection_operations", {
            "operation": "Disconnect",
@@ -142,7 +146,8 @@ def disconnect(ctx: PbiContext, name: str | None) -> None:
        print_error(f"Disconnect failed: {e}")
        raise SystemExit(1)
    finally:
        client.stop()
        if not repl:
            client.stop()


@click.group()

@@ -7,7 +7,7 @@ import sys

import click

from pbi_cli.core.mcp_client import get_client
from pbi_cli.core.output import format_mcp_result, print_error, print_json
from pbi_cli.core.output import format_mcp_result, print_error
from pbi_cli.main import PbiContext, pass_context

@@ -0,0 +1,21 @@
"""REPL command -- starts an interactive pbi-cli session."""
from __future__ import annotations

import click

from pbi_cli.main import PbiContext, pass_context


@click.command()
@pass_context
def repl(ctx: PbiContext) -> None:
    """Start an interactive REPL session.

    Keeps the MCP server process alive across commands, avoiding the
    2-3 second startup cost on each invocation. Type 'exit' or press
    Ctrl+D to quit.
    """
    from pbi_cli.utils.repl import start_repl

    start_repl(json_output=ctx.json_output, connection=ctx.connection)

@@ -8,7 +8,6 @@ from pbi_cli.core.binary_manager import (
    check_for_updates,
    download_and_extract,
    get_binary_info,
    resolve_binary,
)
from pbi_cli.core.output import print_error, print_info, print_json, print_key_value, print_success
from pbi_cli.main import PbiContext, pass_context

@@ -0,0 +1,116 @@
"""Skill installer commands for Claude Code integration."""
from __future__ import annotations

import importlib.resources
import shutil
from pathlib import Path

import click

from pbi_cli.main import pass_context

SKILLS_TARGET_DIR = Path.home() / ".claude" / "skills"


def _get_bundled_skills() -> dict[str, importlib.resources.abc.Traversable]:
    """Return a mapping of skill-name -> Traversable for each bundled skill."""
    skills_pkg = importlib.resources.files("pbi_cli.skills")
    result: dict[str, importlib.resources.abc.Traversable] = {}
    for item in skills_pkg.iterdir():
        if item.is_dir() and (item / "SKILL.md").is_file():
            result[item.name] = item
    return result


def _is_installed(skill_name: str) -> bool:
    """Check if a skill is already installed in ~/.claude/skills/."""
    return (SKILLS_TARGET_DIR / skill_name / "SKILL.md").exists()


@click.group("skills")
def skills() -> None:
    """Manage Claude Code skills for Power BI workflows."""


@skills.command("list")
@pass_context
def skills_list(ctx: object) -> None:
    """List available and installed skills."""
    bundled = _get_bundled_skills()
    if not bundled:
        click.echo("No bundled skills found.", err=True)
        return
    click.echo("Available Power BI skills:\n", err=True)
    for name in sorted(bundled):
        status = "installed" if _is_installed(name) else "not installed"
        click.echo(f"  {name:<30} [{status}]", err=True)
    click.echo(
        f"\nTarget directory: {SKILLS_TARGET_DIR}",
        err=True,
    )


@skills.command("install")
@click.option("--skill", "skill_name", default=None, help="Install a specific skill.")
@click.option("--force", is_flag=True, default=False, help="Overwrite existing installations.")
@pass_context
def skills_install(ctx: object, skill_name: str | None, force: bool) -> None:
    """Install skills to ~/.claude/skills/ for Claude Code discovery."""
    bundled = _get_bundled_skills()
    if not bundled:
        click.echo("No bundled skills found.", err=True)
        return
    if skill_name and skill_name not in bundled:
        raise click.ClickException(
            f"Unknown skill '{skill_name}'. "
            f"Available: {', '.join(sorted(bundled))}"
        )
    to_install = {skill_name: bundled[skill_name]} if skill_name else bundled
    installed_count = 0
    for name, source in sorted(to_install.items()):
        target_dir = SKILLS_TARGET_DIR / name
        if target_dir.exists() and not force:
            click.echo(f"  {name}: already installed (use --force to overwrite)", err=True)
            continue
        target_dir.mkdir(parents=True, exist_ok=True)
        source_file = source / "SKILL.md"
        target_file = target_dir / "SKILL.md"
        # Read from importlib resource and write to target
        target_file.write_text(source_file.read_text(encoding="utf-8"), encoding="utf-8")
        installed_count += 1
        click.echo(f"  {name}: installed", err=True)
    click.echo(f"\n{installed_count} skill(s) installed to {SKILLS_TARGET_DIR}", err=True)


@skills.command("uninstall")
@click.option("--skill", "skill_name", default=None, help="Uninstall a specific skill.")
@pass_context
def skills_uninstall(ctx: object, skill_name: str | None) -> None:
    """Remove installed skills from ~/.claude/skills/."""
    bundled = _get_bundled_skills()
    names = [skill_name] if skill_name else sorted(bundled)
    removed_count = 0
    for name in names:
        target_dir = SKILLS_TARGET_DIR / name
        if not target_dir.exists():
            click.echo(f"  {name}: not installed", err=True)
            continue
        shutil.rmtree(target_dir)
        removed_count += 1
        click.echo(f"  {name}: removed", err=True)
    click.echo(f"\n{removed_count} skill(s) removed.", err=True)

@@ -12,12 +12,11 @@ import shutil
import tempfile
import zipfile
from pathlib import Path
from typing import Any

import httpx

from pbi_cli.core.config import PBI_CLI_HOME, PbiConfig, ensure_home_dir, load_config, save_config
from pbi_cli.core.output import print_error, print_info, print_success, print_warning
from pbi_cli.core.config import PBI_CLI_HOME, ensure_home_dir, load_config, save_config
from pbi_cli.core.output import print_info, print_success
from pbi_cli.utils.platform import (
    binary_name,
    detect_platform,
@@ -25,7 +24,6 @@ from pbi_cli.utils.platform import (
    find_vscode_extension_binary,
)

EXTENSION_ID = "analysis-services.powerbi-modeling-mcp"
PUBLISHER = "analysis-services"
EXTENSION_NAME = "powerbi-modeling-mcp"

@@ -9,7 +9,6 @@ import json
from dataclasses import asdict, dataclass, field
from pathlib import Path
PBI_CLI_HOME = Path.home() / ".pbi-cli"
CONFIG_FILE = PBI_CLI_HOME / "config.json"

@@ -4,11 +4,9 @@ from __future__ import annotations
import json
from dataclasses import asdict, dataclass
from pathlib import Path
from pbi_cli.core.config import PBI_CLI_HOME, ensure_home_dir
CONNECTIONS_FILE = PBI_CLI_HOME / "connections.json"

@@ -0,0 +1,42 @@
"""User-facing error types for pbi-cli.

These exceptions integrate with Click's error formatting so that
errors display cleanly in both normal and REPL modes.
"""
from __future__ import annotations

import click


class PbiCliError(click.ClickException):
    """Base error for all pbi-cli user-facing failures."""

    def __init__(self, message: str) -> None:
        super().__init__(message)


class BinaryNotFoundError(PbiCliError):
    """Raised when the MCP server binary cannot be resolved."""

    def __init__(
        self,
        message: str = "Power BI MCP binary not found. Run 'pbi setup' first.",
    ) -> None:
        super().__init__(message)


class ConnectionRequiredError(PbiCliError):
    """Raised when a command requires an active connection but none exists."""

    def __init__(self, message: str = "No active connection. Run 'pbi connect' first.") -> None:
        super().__init__(message)


class McpToolError(PbiCliError):
    """Raised when an MCP tool call fails."""

    def __init__(self, tool_name: str, detail: str) -> None:
        self.tool_name = tool_name
        self.detail = detail
        super().__init__(f"{tool_name}: {detail}")

@@ -9,8 +9,6 @@ from __future__ import annotations
import asyncio
import atexit
import sys
from contextlib import asynccontextmanager
from pathlib import Path
from typing import Any

@@ -3,14 +3,12 @@
from __future__ import annotations
import json
import sys
from typing import Any
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
console = Console()
error_console = Console(stderr=True)

@@ -2,9 +2,6 @@
from __future__ import annotations

import sys
from typing import Any

import click

from pbi_cli import __version__
@@ -13,9 +10,15 @@ from pbi_cli import __version__
class PbiContext:
    """Shared context passed to all CLI commands."""

    def __init__(self, json_output: bool = False, connection: str | None = None) -> None:
    def __init__(
        self,
        json_output: bool = False,
        connection: str | None = None,
        repl_mode: bool = False,
    ) -> None:
        self.json_output = json_output
        self.connection = connection
        self.repl_mode = repl_mode


pass_context = click.make_pass_decorator(PbiContext, ensure=True)
@@ -40,25 +43,27 @@ def cli(ctx: click.Context, json_output: bool, connection: str | None) -> None:
def _register_commands() -> None:
    """Lazily import and register all command groups."""
    from pbi_cli.commands.setup_cmd import setup
    from pbi_cli.commands.connection import connect, connect_fabric, disconnect, connections
    from pbi_cli.commands.dax import dax
    from pbi_cli.commands.measure import measure
    from pbi_cli.commands.table import table
    from pbi_cli.commands.column import column
    from pbi_cli.commands.relationship import relationship
    from pbi_cli.commands.model import model
    from pbi_cli.commands.database import database
    from pbi_cli.commands.security import security_role
    from pbi_cli.commands.advanced import advanced
    from pbi_cli.commands.calc_group import calc_group
    from pbi_cli.commands.calendar import calendar
    from pbi_cli.commands.column import column
    from pbi_cli.commands.connection import connect, connect_fabric, connections, disconnect
    from pbi_cli.commands.database import database
    from pbi_cli.commands.dax import dax
    from pbi_cli.commands.expression import expression
    from pbi_cli.commands.hierarchy import hierarchy
    from pbi_cli.commands.measure import measure
    from pbi_cli.commands.model import model
    from pbi_cli.commands.partition import partition
    from pbi_cli.commands.perspective import perspective
    from pbi_cli.commands.hierarchy import hierarchy
    from pbi_cli.commands.expression import expression
    from pbi_cli.commands.calendar import calendar
    from pbi_cli.commands.relationship import relationship
    from pbi_cli.commands.repl_cmd import repl
    from pbi_cli.commands.security import security_role
    from pbi_cli.commands.setup_cmd import setup
    from pbi_cli.commands.skills_cmd import skills
    from pbi_cli.commands.table import table
    from pbi_cli.commands.trace import trace
    from pbi_cli.commands.transaction import transaction
    from pbi_cli.commands.advanced import advanced

    cli.add_command(setup)
    cli.add_command(connect)
@@ -82,6 +87,8 @@ def _register_commands() -> None:
    cli.add_command(trace)
    cli.add_command(transaction)
    cli.add_command(advanced)
    cli.add_command(repl)
    cli.add_command(skills)


_register_commands()

@@ -0,0 +1 @@
"""Bundled Claude Skills for Power BI workflows."""

@@ -0,0 +1,172 @@
---
name: Power BI DAX
description: Write, execute, and optimize DAX queries and measures for Power BI semantic models. Use when the user mentions DAX, Power BI calculations, querying data, or wants to analyze data in a semantic model.
tools: pbi-cli
---
# Power BI DAX Skill
Execute and validate DAX queries against connected Power BI models.
## Prerequisites
```bash
pip install pbi-cli
pbi setup
pbi connect --data-source localhost:54321
```
## Executing Queries
```bash
# Inline query
pbi dax execute "EVALUATE TOPN(10, Sales)"
# From file
pbi dax execute --file query.dax
# From stdin (piping)
cat query.dax | pbi dax execute -
echo "EVALUATE Sales" | pbi dax execute -
# With options
pbi dax execute "EVALUATE Sales" --max-rows 100
pbi dax execute "EVALUATE Sales" --metrics # Include execution metrics
pbi dax execute "EVALUATE Sales" --metrics-only # Metrics without data
pbi dax execute "EVALUATE Sales" --timeout 300 # Custom timeout (seconds)
# JSON output for scripting
pbi --json dax execute "EVALUATE Sales"
```
## Validating Queries
```bash
pbi dax validate "EVALUATE Sales"
pbi dax validate --file query.dax
```
## Cache Management
```bash
pbi dax clear-cache # Clear the formula engine cache
```
## Creating Measures with DAX
```bash
# Simple aggregation
pbi measure create "Total Sales" -e "SUM(Sales[Amount])" -t Sales
# Time intelligence
pbi measure create "YTD Sales" -e "TOTALYTD(SUM(Sales[Amount]), Calendar[Date])" -t Sales
# Previous year comparison
pbi measure create "PY Sales" -e "CALCULATE([Total Sales], SAMEPERIODLASTYEAR(Calendar[Date]))" -t Sales
# Year-over-year change
pbi measure create "YoY %" -e "DIVIDE([Total Sales] - [PY Sales], [PY Sales])" -t Sales --format-string "0.0%"
```
## Common DAX Patterns
### Explore Model Data
```bash
# List all tables
pbi dax execute "EVALUATE INFO.TABLES()"
# List columns in a table
pbi dax execute "EVALUATE INFO.COLUMNS()"
# Preview table data
pbi dax execute "EVALUATE TOPN(10, Sales)"
# Count rows
pbi dax execute "EVALUATE ROW(\"Count\", COUNTROWS(Sales))"
```
### Aggregations
```bash
# Basic sum
pbi dax execute "EVALUATE ROW(\"Total\", SUM(Sales[Amount]))"
# Group by with aggregation
pbi dax execute "EVALUATE SUMMARIZECOLUMNS(Products[Category], \"Total\", SUM(Sales[Amount]))"
# Multiple aggregations
pbi dax execute "
EVALUATE
SUMMARIZECOLUMNS(
Products[Category],
\"Total Sales\", SUM(Sales[Amount]),
\"Avg Price\", AVERAGE(Sales[UnitPrice]),
\"Count\", COUNTROWS(Sales)
)
"
```
### Filtering
```bash
# CALCULATE with filter
pbi dax execute "
EVALUATE
ROW(\"Online Sales\", CALCULATE(SUM(Sales[Amount]), Sales[Channel] = \"Online\"))
"
# FILTER with complex condition
pbi dax execute "
EVALUATE
FILTER(
SUMMARIZECOLUMNS(Products[Name], \"Total\", SUM(Sales[Amount])),
[Total] > 1000
)
"
```
### Time Intelligence
```bash
# Year-to-date
pbi dax execute "
EVALUATE
ROW(\"YTD\", TOTALYTD(SUM(Sales[Amount]), Calendar[Date]))
"
# Rolling 12 months
pbi dax execute "
EVALUATE
ROW(\"R12\", CALCULATE(
SUM(Sales[Amount]),
DATESINPERIOD(Calendar[Date], MAX(Calendar[Date]), -12, MONTH)
))
"
```
### Ranking
```bash
# Top products by sales
pbi dax execute "
EVALUATE
TOPN(
10,
ADDCOLUMNS(
VALUES(Products[Name]),
\"Total\", CALCULATE(SUM(Sales[Amount]))
),
[Total], DESC
)
"
```
## Performance Tips
- Use `--metrics` to identify slow queries
- Use `--max-rows` to limit result sets during development
- Run `pbi dax clear-cache` before benchmarking
- Prefer `SUMMARIZECOLUMNS` over `SUMMARIZE` for grouping
- Use `CALCULATE` with simple filters instead of nested `FILTER`
- Avoid iterators (`SUMX`, `FILTER`) on large tables when aggregations suffice

@@ -0,0 +1,152 @@
---
name: Power BI Deployment
description: Deploy Power BI semantic models to Fabric workspaces, import and export TMDL and TMSL formats, and manage model lifecycle. Use when the user mentions deploying, publishing, migrating, or version-controlling Power BI models.
tools: pbi-cli
---
# Power BI Deployment Skill
Manage model lifecycle with TMDL export/import and Fabric workspace deployment.
## Prerequisites
```bash
pip install pbi-cli
pbi setup
```
## Connecting to Targets
```bash
# Local Power BI Desktop
pbi connect --data-source localhost:54321
# Fabric workspace (cloud)
pbi connect-fabric --workspace "Production" --model "Sales Model"
# Named connections for switching
pbi connect --data-source localhost:54321 --name dev
pbi connect-fabric --workspace "Production" --model "Sales" --name prod
pbi connections list
```
## TMDL Export and Import
TMDL (Tabular Model Definition Language) is the text-based format for version-controlling Power BI models.
```bash
# Export entire model to TMDL folder
pbi database export-tmdl ./model-tmdl/
# Import TMDL folder into connected model
pbi database import-tmdl ./model-tmdl/
# Export individual objects
pbi model export-tmdl # Full model definition
pbi table export-tmdl Sales # Single table
pbi measure export-tmdl "Total Revenue" -t Sales # Single measure
pbi relationship export-tmdl RelName # Single relationship
pbi security-role export-tmdl "Readers" # Security role
```
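For orientation, an exported table file looks roughly like this (illustrative only; the exact TMDL emitted depends on the model and tooling version):

```
table Sales

	measure 'Total Revenue' = SUM(Sales[Revenue])
		formatString: #,0

	column Amount
		dataType: double
		summarizeBy: sum
```

Because it is plain text with one object per block, TMDL diffs cleanly in git.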
## TMSL Export
```bash
# Export as TMSL JSON (for SSAS/AAS compatibility)
pbi database export-tmsl
```
## Database Operations
```bash
# List databases on the connected server
pbi database list
```
## Transaction Management
Use transactions for atomic multi-step changes:
```bash
# Begin a transaction
pbi transaction begin
# Make changes
pbi measure create "New KPI" -e "SUM(Sales[Amount])" -t Sales
pbi measure create "Another KPI" -e "COUNT(Sales[OrderID])" -t Sales
# Commit all changes atomically
pbi transaction commit
# Or rollback if something went wrong
pbi transaction rollback
```
## Model Refresh
```bash
# Refresh entire model
pbi model refresh # Automatic (default)
pbi model refresh --type Full # Full refresh
pbi model refresh --type Calculate # Recalculate only
pbi model refresh --type DataOnly # Data only, no recalc
pbi model refresh --type Defragment # Defragment storage
# Refresh individual tables
pbi table refresh Sales --type Full
```
## Workflow: Version Control with Git
```bash
# 1. Export model to TMDL
pbi database export-tmdl ./model/
# 2. Commit to git
cd model/
git add .
git commit -m "feat: add new revenue measures"
# 3. Deploy to another environment
pbi connect-fabric --workspace "Staging" --model "Sales Model"
pbi database import-tmdl ./model/
```
## Workflow: Promote Dev to Production
```bash
# 1. Connect to dev and export
pbi connect --data-source localhost:54321 --name dev
pbi database export-tmdl ./staging-model/
# 2. Connect to production and import
pbi connect-fabric --workspace "Production" --model "Sales" --name prod
pbi database import-tmdl ./staging-model/
# 3. Refresh production data
pbi model refresh --type Full
```
## Workflow: Inspect Model Before Deploy
```bash
# Get model metadata
pbi --json model get
# Check model statistics
pbi --json model stats
# List all objects
pbi --json table list
pbi --json measure list
pbi --json relationship list
```
## Best Practices
- Always export TMDL before making changes (backup)
- Use transactions for multi-object changes
- Test changes in dev before deploying to production
- Use `--json` for scripted deployments
- Store TMDL in git for version history
- Use named connections (`--name`) to avoid accidental deployments to wrong environment

@@ -0,0 +1,148 @@
---
name: Power BI Documentation
description: Auto-document Power BI semantic models by extracting metadata, generating comprehensive documentation, and cataloging all model objects. Use when the user wants to document a Power BI model, create a data dictionary, or audit model contents.
tools: pbi-cli
---
# Power BI Documentation Skill
Generate comprehensive documentation for Power BI semantic models.
## Prerequisites
```bash
pip install pbi-cli
pbi setup
pbi connect --data-source localhost:54321
```
## Quick Model Overview
```bash
pbi --json model get # Model metadata
pbi --json model stats # Table/measure/column counts
```
## Catalog All Objects
```bash
# Tables and their structure
pbi --json table list
pbi --json table get Sales
pbi --json table schema Sales
# All measures
pbi --json measure list
# Individual measure details
pbi --json measure get "Total Revenue" --table Sales
# Columns per table
pbi --json column list --table Sales
pbi --json column list --table Products
# Relationships
pbi --json relationship list
# Security roles
pbi --json security-role list
# Hierarchies
pbi --json hierarchy list --table Date
# Calculation groups
pbi --json calc-group list
# Perspectives
pbi --json perspective list
# Named expressions
pbi --json expression list
```
## Export Full Model as TMDL
```bash
pbi database export-tmdl ./model-docs/
```
This creates a human-readable text representation of the entire model.
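After exporting, it can help to enumerate what was produced. A minimal Python sketch (the exact folder layout is an assumption; TMDL exports typically contain one `.tmdl` file per object):

```python
from pathlib import Path

def list_tmdl_files(export_dir: str) -> list[str]:
    """Collect all .tmdl files under the export directory, recursively."""
    return sorted(str(p) for p in Path(export_dir).rglob("*.tmdl"))

# Usage after `pbi database export-tmdl ./model-docs/`:
# print("\n".join(list_tmdl_files("./model-docs")))
```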
## Workflow: Generate Model Documentation
Run these commands to gather all information needed for documentation:
```bash
# Step 1: Model overview
pbi --json model get > model-meta.json
pbi --json model stats > model-stats.json
# Step 2: All tables
pbi --json table list > tables.json
# Step 3: All measures
pbi --json measure list > measures.json
# Step 4: All relationships
pbi --json relationship list > relationships.json
# Step 5: Security roles
pbi --json security-role list > security-roles.json
# Step 6: Column details per table (loop through tables)
pbi --json column list --table Sales > columns-sales.json
pbi --json column list --table Products > columns-products.json
# Step 7: Full TMDL export
pbi database export-tmdl ./tmdl-export/
```
Then assemble these JSON files into markdown or HTML documentation.
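A minimal Python sketch of that assembly step. The file names come from the workflow above; the JSON shapes (lists of objects with a `name` key) are an assumption, so adjust to the actual pbi-cli output:

```python
import json
from pathlib import Path

# Section title -> exported JSON file (names from the workflow above).
SECTIONS = {
    "Tables": "tables.json",
    "Measures": "measures.json",
    "Relationships": "relationships.json",
    "Security Roles": "security-roles.json",
}

def build_docs(src: Path, dest: Path) -> None:
    """Assemble the exported JSON files into a single markdown document."""
    lines = ["# Model Documentation", ""]
    for title, filename in SECTIONS.items():
        path = src / filename
        if not path.exists():
            continue  # skip sections that were not exported
        items = json.loads(path.read_text(encoding="utf-8"))
        lines.append(f"## {title}")
        for item in items:
            # Fall back to the raw object if there is no "name" key.
            if isinstance(item, dict):
                name = item.get("name", json.dumps(item))
            else:
                name = str(item)
            lines.append(f"- {name}")
        lines.append("")
    dest.write_text("\n".join(lines), encoding="utf-8")

# Usage: build_docs(Path("."), Path("model-docs.md"))
```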
## Workflow: Data Dictionary
For each table, extract columns and their types:
```bash
# Get schema for key tables
pbi --json table schema Sales
pbi --json table schema Products
pbi --json table schema Calendar
```
## Workflow: Measure Catalog
Create a complete measure inventory:
```bash
# List all measures with expressions
pbi --json measure list
# Export individual measure definitions as TMDL
pbi measure export-tmdl "Total Revenue" --table Sales
pbi measure export-tmdl "YTD Revenue" --table Sales
```
## Translation and Culture Management
For multi-language documentation:
```bash
# List cultures/translations
pbi --json advanced culture list
pbi --json advanced translation list
# Create culture for localization
pbi advanced culture create "fr-FR"
pbi advanced translation create --culture "fr-FR" --object "Total Sales" --translation "Ventes Totales"
```
## Best Practices
- Always use `--json` flag for machine-readable output
- Export TMDL alongside JSON for complete documentation
- Run documentation generation as part of CI/CD pipeline
- Keep documentation in version control alongside TMDL exports
- Include relationship diagrams (generate from `pbi --json relationship list`)
- Document measure business logic, not just DAX expressions
- Tag measures by business domain using display folders
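The relationship-diagram practice above can be sketched in a few lines of Python. The field names `fromTable` and `toTable` are assumptions about the JSON shape of `pbi --json relationship list`; verify against your pbi-cli version:

```python
import json

def to_mermaid(relationships_json: str) -> str:
    """Turn relationship JSON into a Mermaid graph definition."""
    rels = json.loads(relationships_json)
    lines = ["graph LR"]
    for rel in rels:
        lines.append(f'    {rel["fromTable"]} --> {rel["toTable"]}')
    return "\n".join(lines)

# Example with two hypothetical relationships:
sample = json.dumps([
    {"fromTable": "Sales", "toTable": "Products"},
    {"fromTable": "Sales", "toTable": "Calendar"},
])
print(to_mermaid(sample))
```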

@@ -0,0 +1,128 @@
---
name: Power BI Modeling
description: Create and manage Power BI semantic model objects including tables, columns, measures, relationships, hierarchies, and calculation groups. Use when the user mentions Power BI modeling, semantic models, or wants to create or modify model objects.
tools: pbi-cli
---
# Power BI Modeling Skill
Use pbi-cli to manage semantic model structure. Requires `pip install pbi-cli` and `pbi setup`.
## Prerequisites
```bash
pip install pbi-cli
pbi setup
pbi connect --data-source localhost:54321
```
## Tables
```bash
pbi table list # List all tables
pbi table get Sales # Get table details
pbi table create Sales --mode Import # Create table
pbi table delete OldTable # Delete table
pbi table rename OldName NewName # Rename table
pbi table refresh Sales --type Full # Refresh table data
pbi table schema Sales # Get table schema
pbi table mark-date Calendar --date-column Date # Mark as date table
pbi table export-tmdl Sales # Export as TMDL
```
## Columns
```bash
pbi column list --table Sales # List columns
pbi column get Amount --table Sales # Get column details
pbi column create Revenue --table Sales --data-type double --source-column Revenue # Data column
pbi column create Profit --table Sales --expression "[Revenue]-[Cost]" # Calculated
pbi column delete OldCol --table Sales # Delete column
pbi column rename OldName NewName --table Sales # Rename column
```
## Measures
```bash
pbi measure list # List all measures
pbi measure list --table Sales # Filter by table
pbi measure get "Total Revenue" --table Sales # Get details
pbi measure create "Total Revenue" -e "SUM(Sales[Revenue])" -t Sales # Basic
pbi measure create "Revenue $" -e "SUM(Sales[Revenue])" -t Sales --format-string "\$#,##0" # Formatted
pbi measure create "KPI" -e "..." -t Sales --folder "Key Measures" --description "Main KPI" # With metadata
pbi measure update "Total Revenue" -t Sales -e "SUMX(Sales, Sales[Qty]*Sales[Price])" # Update expression
pbi measure delete "Old Measure" -t Sales # Delete
pbi measure rename "Old" "New" -t Sales # Rename
pbi measure move "Revenue" -t Sales --to-table Finance # Move to another table
pbi measure export-tmdl "Total Revenue" -t Sales # Export as TMDL
```
## Relationships
```bash
pbi relationship list # List all relationships
pbi relationship get RelName # Get details
pbi relationship create \
--from-table Sales --from-column ProductKey \
--to-table Products --to-column ProductKey # Create relationship
pbi relationship delete RelName # Delete
pbi relationship export-tmdl RelName # Export as TMDL
```
## Hierarchies
```bash
pbi hierarchy list --table Date # List hierarchies
pbi hierarchy get "Calendar" --table Date # Get details
pbi hierarchy create "Calendar" --table Date # Create
pbi hierarchy add-level "Calendar" --table Date --column Year --ordinal 0 # Add level
pbi hierarchy delete "Calendar" --table Date # Delete
```
## Calculation Groups
```bash
pbi calc-group list # List calculation groups
pbi calc-group create "Time Intelligence" --description "Time calcs" # Create group
pbi calc-group items "Time Intelligence" # List items
pbi calc-group create-item "YTD" \
--group "Time Intelligence" \
--expression "CALCULATE(SELECTEDMEASURE(), DATESYTD(Calendar[Date]))" # Add item
pbi calc-group delete "Time Intelligence" # Delete group
```
## Workflow: Create a Star Schema
```bash
# 1. Create fact table
pbi table create Sales --mode Import
# 2. Create dimension tables
pbi table create Products --mode Import
pbi table create Calendar --mode Import
# 3. Create relationships
pbi relationship create --from-table Sales --from-column ProductKey --to-table Products --to-column ProductKey
pbi relationship create --from-table Sales --from-column DateKey --to-table Calendar --to-column DateKey
# 4. Mark date table
pbi table mark-date Calendar --date-column Date
# 5. Add measures
pbi measure create "Total Revenue" -e "SUM(Sales[Revenue])" -t Sales --format-string "\$#,##0"
pbi measure create "Total Qty" -e "SUM(Sales[Quantity])" -t Sales --format-string "#,##0"
pbi measure create "Avg Price" -e "AVERAGE(Sales[UnitPrice])" -t Sales --format-string "\$#,##0.00"
# 6. Verify
pbi table list
pbi measure list
pbi relationship list
```
## Best Practices
- Use format strings for currency (`$#,##0`), percentage (`0.0%`), and integer (`#,##0`) measures
- Organize measures into display folders by business domain
- Always mark calendar tables with `mark-date` for time intelligence
- Use `--json` flag when scripting: `pbi --json measure list`
- Export TMDL for version control: `pbi table export-tmdl Sales`
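When scripting with `--json`, a thin Python wrapper keeps the parsing in one place. This sketch assumes `pbi` is on PATH with a live connection, and guesses at the JSON shape of `measure list` (a list of objects with `name`/`tableName` keys):

```python
import json
import subprocess

def parse_measures(stdout: str) -> list[dict]:
    """Parse the JSON emitted by `pbi --json measure list`."""
    return json.loads(stdout)

def list_measures() -> list[dict]:
    """Invoke pbi-cli and return the parsed measure list."""
    result = subprocess.run(
        ["pbi", "--json", "measure", "list"],
        capture_output=True, text=True, check=True,
    )
    return parse_measures(result.stdout)

# Usage (requires a connected model):
# for m in list_measures():
#     print(m.get("tableName"), m.get("name"))
```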

@@ -0,0 +1,116 @@
---
name: Power BI Security
description: Configure row-level security (RLS) roles, manage object-level security, and set up perspectives for Power BI semantic models. Use when the user mentions Power BI security, RLS, access control, or data restrictions.
tools: pbi-cli
---
# Power BI Security Skill
Manage row-level security (RLS) and perspectives for Power BI models.
## Prerequisites
```bash
pip install pbi-cli
pbi setup
pbi connect --data-source localhost:54321
```
## Security Roles (RLS)
```bash
# List all security roles
pbi security-role list
# Get role details
pbi security-role get "Regional Manager"
# Create a new role
pbi security-role create "Regional Manager" \
--description "Restricts data to user's region"
# Delete a role
pbi security-role delete "Regional Manager"
# Export role as TMDL
pbi security-role export-tmdl "Regional Manager"
```
## Perspectives
Perspectives control which tables and columns are visible to users:
```bash
# List all perspectives
pbi perspective list
# Create a perspective
pbi perspective create "Sales View"
# Delete a perspective
pbi perspective delete "Sales View"
```
## Workflow: Set Up RLS
```bash
# 1. Create roles
pbi security-role create "Sales Team" --description "Sales data only"
pbi security-role create "Finance Team" --description "Finance data only"
# 2. Verify roles were created
pbi --json security-role list
# 3. Export for version control
pbi security-role export-tmdl "Sales Team"
pbi security-role export-tmdl "Finance Team"
```
## Workflow: Create User-Focused Perspectives
```bash
# 1. Create perspectives for different audiences
pbi perspective create "Executive Dashboard"
pbi perspective create "Sales Detail"
pbi perspective create "Finance Overview"
# 2. Verify
pbi --json perspective list
```
## Common RLS Patterns
### Region-Based Security
Create a role that filters by the authenticated user's region:
```bash
pbi security-role create "Region Filter" \
--description "Users see only their region's data"
```
Then define table permissions with DAX filter expressions in the model (via TMDL or Power BI Desktop).
### Department-Based Security
```bash
pbi security-role create "Department Filter" \
--description "Users see only their department's data"
```
### Manager Hierarchy
```bash
pbi security-role create "Manager View" \
--description "Managers see their direct reports' data"
```
## Best Practices
- Create roles with clear, descriptive names
- Always add descriptions explaining the access restriction
- Export roles as TMDL for version control
- Test RLS thoroughly before publishing to production
- Use perspectives to simplify the model for different user groups
- Document role-to-group mappings externally (RLS roles map to Azure AD groups in Power BI Service)
- Use `--json` output for automated security audits: `pbi --json security-role list`
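The automated-audit bullet above can be sketched as a small check that every role carries a description. The JSON field names are assumptions about the output of `pbi --json security-role list`:

```python
import json

def audit_roles(roles_json: str) -> list[str]:
    """Return names of roles that are missing a description."""
    roles = json.loads(roles_json)
    return [r.get("name", "<unnamed>") for r in roles if not r.get("description")]

# Example with one compliant and one non-compliant role:
sample = json.dumps([
    {"name": "Sales Team", "description": "Sales data only"},
    {"name": "Temp Role"},
])
print(audit_roles(sample))
```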

@@ -2,12 +2,10 @@
from __future__ import annotations
import os
import platform
import stat
from pathlib import Path
# Maps (system, machine) to VS Marketplace target platform identifier.
PLATFORM_MAP: dict[tuple[str, str], str] = {
("Windows", "AMD64"): "win32-x64",

src/pbi_cli/utils/repl.py
@@ -0,0 +1,157 @@
"""Interactive REPL for pbi-cli with persistent MCP connection.
Keeps the Power BI MCP server process alive across commands so that
subsequent calls skip the startup cost (~2-3 seconds per invocation).
Usage:
pbi repl
pbi --json repl
"""
from __future__ import annotations
import platform
import shlex
import click
from prompt_toolkit import PromptSession
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.history import FileHistory
from pbi_cli.core.config import PBI_CLI_HOME, ensure_home_dir
from pbi_cli.core.connection_store import load_connections
from pbi_cli.core.mcp_client import get_shared_client
from pbi_cli.core.output import print_error, print_info, print_warning
_QUIT_COMMANDS = frozenset({"exit", "quit", "q"})
_HISTORY_FILE = PBI_CLI_HOME / "repl_history"
class PbiRepl:
"""Interactive REPL that dispatches input to Click commands."""
def __init__(
self,
json_output: bool = False,
connection: str | None = None,
) -> None:
self._json_output = json_output
self._connection = connection
def _build_completer(self) -> WordCompleter:
"""Build auto-completer from registered Click commands."""
from pbi_cli.main import cli
words: list[str] = []
for name, cmd in cli.commands.items():
words.append(name)
if isinstance(cmd, click.MultiCommand):
sub_names = cmd.list_commands(click.Context(cmd))
for sub in sub_names:
words.append(f"{name} {sub}")
words.append(sub)
return WordCompleter(words, ignore_case=True)
def _get_prompt(self) -> str:
"""Dynamic prompt showing active connection name."""
store = load_connections()
if store.last_used:
return f"pbi({store.last_used})> "
return "pbi> "
def _execute_line(self, line: str) -> None:
"""Parse and execute a single command line."""
from pbi_cli.main import PbiContext, cli
stripped = line.strip()
if not stripped:
return
if stripped.lower() in _QUIT_COMMANDS:
raise EOFError
# Tokenize -- posix=False on Windows to handle backslash paths
is_posix = platform.system() != "Windows"
try:
tokens = shlex.split(stripped, posix=is_posix)
except ValueError as e:
print_error(f"Parse error: {e}")
return
# Strip leading "pbi" if user types full command out of habit
if tokens and tokens[0] == "pbi":
tokens = tokens[1:]
if not tokens:
return
# Build a fresh Click context per invocation to avoid state leaking
try:
ctx = click.Context(cli, info_name="pbi")
ctx.ensure_object(PbiContext)
ctx.obj = PbiContext(
json_output=self._json_output,
connection=self._connection,
repl_mode=True,
)
with ctx:
cli.parse_args(ctx, list(tokens))
cli.invoke(ctx)
except SystemExit:
# Click raises SystemExit on --help, bad args, etc.
pass
except click.ClickException as e:
e.show()
except click.Abort:
print_warning("Aborted.")
except KeyboardInterrupt:
# Ctrl+C cancels current command, not the REPL
pass
except Exception as e:
print_error(str(e))
def run(self) -> None:
"""Main REPL loop."""
ensure_home_dir()
session: PromptSession[str] = PromptSession(
history=FileHistory(str(_HISTORY_FILE)),
completer=self._build_completer(),
)
print_info("pbi-cli interactive mode. Type 'exit' or Ctrl+D to quit.")
# Pre-warm the shared MCP server
try:
client = get_shared_client()
client.start()
except Exception as e:
print_warning(f"Could not pre-warm MCP server: {e}")
print_info("Commands will start the server on first use.")
try:
while True:
try:
line = session.prompt(self._get_prompt())
self._execute_line(line)
except KeyboardInterrupt:
# Ctrl+C on empty prompt prints hint
print_info("Type 'exit' or press Ctrl+D to quit.")
continue
except EOFError:
pass
finally:
# Shut down the shared MCP server
try:
get_shared_client().stop()
except Exception:
pass
print_info("Goodbye.")
def start_repl(
json_output: bool = False,
connection: str | None = None,
) -> None:
"""Entry point called by the ``pbi repl`` command."""
repl = PbiRepl(json_output=json_output, connection=connection)
repl.run()

tests/conftest.py
@@ -0,0 +1,152 @@
"""Shared test fixtures for pbi-cli."""
from __future__ import annotations
from pathlib import Path
from typing import Any
import pytest
from click.testing import CliRunner
# ---------------------------------------------------------------------------
# Canned MCP responses used by the mock client
# ---------------------------------------------------------------------------
CANNED_RESPONSES: dict[str, dict[str, Any]] = {
"connection_operations": {
"Connect": {"status": "connected", "connectionName": "test-conn"},
"ConnectFabric": {"status": "connected", "connectionName": "ws/model"},
"Disconnect": {"status": "disconnected"},
},
"dax_query_operations": {
"Execute": {"columns": ["Amount"], "rows": [{"Amount": 42}]},
"Validate": {"isValid": True},
"ClearCache": {"status": "cleared"},
},
"measure_operations": {
"List": [
{"name": "Total Sales", "expression": "SUM(Sales[Amount])", "tableName": "Sales"},
],
"Get": {"name": "Total Sales", "expression": "SUM(Sales[Amount])"},
"Create": {"status": "created"},
"Update": {"status": "updated"},
"Delete": {"status": "deleted"},
"Rename": {"status": "renamed"},
"Move": {"status": "moved"},
"ExportTMDL": "measure 'Total Sales'\n expression = SUM(Sales[Amount])",
},
"table_operations": {
"List": [{"name": "Sales", "mode": "Import"}],
"Get": {"name": "Sales", "mode": "Import", "columns": []},
"Create": {"status": "created"},
"Delete": {"status": "deleted"},
"Refresh": {"status": "refreshed"},
"GetSchema": {"name": "Sales", "columns": [{"name": "Amount", "type": "double"}]},
"ExportTMDL": "table Sales\n mode: Import",
"Rename": {"status": "renamed"},
"MarkAsDateTable": {"status": "marked"},
},
"model_operations": {
"Get": {"name": "My Model", "compatibilityLevel": 1600},
"GetStats": {"tables": 5, "measures": 10, "columns": 30},
"Refresh": {"status": "refreshed"},
"Rename": {"status": "renamed"},
"ExportTMDL": "model Model\n culture: en-US",
},
"column_operations": {
"List": [{"name": "Amount", "tableName": "Sales", "dataType": "double"}],
"Get": {"name": "Amount", "dataType": "double"},
"Create": {"status": "created"},
"Update": {"status": "updated"},
"Delete": {"status": "deleted"},
"Rename": {"status": "renamed"},
"ExportTMDL": "column Amount\n dataType: double",
},
}
# ---------------------------------------------------------------------------
# Mock MCP client
# ---------------------------------------------------------------------------
class MockPbiMcpClient:
"""Fake MCP client returning canned responses without spawning a process."""
def __init__(self, responses: dict[str, dict[str, Any]] | None = None) -> None:
self.responses = responses or CANNED_RESPONSES
self.started = False
self.stopped = False
self.calls: list[tuple[str, dict[str, Any]]] = []
def start(self) -> None:
self.started = True
def stop(self) -> None:
self.stopped = True
def call_tool(self, tool_name: str, request: dict[str, Any]) -> Any:
self.calls.append((tool_name, request))
operation = request.get("operation", "")
tool_responses = self.responses.get(tool_name, {})
if operation in tool_responses:
return tool_responses[operation]
return {"status": "ok"}
def list_tools(self) -> list[dict[str, Any]]:
return [
{"name": "measure_operations", "description": "Measure CRUD"},
{"name": "table_operations", "description": "Table CRUD"},
{"name": "dax_query_operations", "description": "DAX queries"},
]
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def mock_client() -> MockPbiMcpClient:
"""A fresh mock MCP client."""
return MockPbiMcpClient()
@pytest.fixture
def patch_get_client(monkeypatch: pytest.MonkeyPatch, mock_client: MockPbiMcpClient) -> MockPbiMcpClient:
"""Monkeypatch get_client in _helpers and connection modules."""
factory = lambda repl_mode=False: mock_client # noqa: E731
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", factory)
monkeypatch.setattr("pbi_cli.commands.connection.get_client", factory)
# Also patch dax.py which calls get_client directly
monkeypatch.setattr("pbi_cli.commands.dax.get_client", factory)
return mock_client
@pytest.fixture
def cli_runner() -> CliRunner:
"""Click test runner with separated stdout/stderr."""
return CliRunner()
@pytest.fixture
def tmp_config(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
"""Redirect PBI_CLI_HOME and CONFIG_FILE to a temp directory."""
monkeypatch.setattr("pbi_cli.core.config.PBI_CLI_HOME", tmp_path)
monkeypatch.setattr("pbi_cli.core.config.CONFIG_FILE", tmp_path / "config.json")
return tmp_path
@pytest.fixture
def tmp_connections(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
"""Redirect CONNECTIONS_FILE to a temp directory."""
conn_file = tmp_path / "connections.json"
monkeypatch.setattr("pbi_cli.core.connection_store.CONNECTIONS_FILE", conn_file)
monkeypatch.setattr("pbi_cli.core.connection_store.PBI_CLI_HOME", tmp_path)
monkeypatch.setattr("pbi_cli.core.config.PBI_CLI_HOME", tmp_path)
monkeypatch.setattr("pbi_cli.core.config.CONFIG_FILE", tmp_path / "config.json")
return tmp_path

@@ -0,0 +1,92 @@
"""Tests for pbi_cli.core.binary_manager."""
from __future__ import annotations
import os
from pathlib import Path
from unittest.mock import patch
import pytest
from pbi_cli.core.binary_manager import (
_binary_source,
_find_managed_binary,
get_binary_info,
resolve_binary,
)
def test_resolve_binary_env_var(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> None:
fake_bin = tmp_path / "powerbi-modeling-mcp.exe"
fake_bin.write_text("fake", encoding="utf-8")
monkeypatch.setenv("PBI_MCP_BINARY", str(fake_bin))
result = resolve_binary()
assert result == fake_bin
def test_resolve_binary_env_var_missing_file(monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setenv("PBI_MCP_BINARY", "/nonexistent/path")
with pytest.raises(FileNotFoundError, match="non-existent"):
resolve_binary()
def test_resolve_binary_not_found(
tmp_config: Path, monkeypatch: pytest.MonkeyPatch
) -> None:
monkeypatch.delenv("PBI_MCP_BINARY", raising=False)
with patch("pbi_cli.core.binary_manager.find_vscode_extension_binary", return_value=None):
with pytest.raises(FileNotFoundError, match="not found"):
resolve_binary()
def test_find_managed_binary(tmp_config: Path) -> None:
bin_dir = tmp_config / "bin" / "0.4.0"
bin_dir.mkdir(parents=True)
fake_bin = bin_dir / "powerbi-modeling-mcp.exe"
fake_bin.write_text("fake", encoding="utf-8")
with patch("pbi_cli.core.binary_manager.PBI_CLI_HOME", tmp_config), \
patch("pbi_cli.core.binary_manager.binary_name", return_value="powerbi-modeling-mcp.exe"):
result = _find_managed_binary()
assert result is not None
assert result.name == "powerbi-modeling-mcp.exe"
def test_find_managed_binary_empty_dir(tmp_config: Path) -> None:
bin_dir = tmp_config / "bin"
bin_dir.mkdir(parents=True)
with patch("pbi_cli.core.binary_manager.PBI_CLI_HOME", tmp_config):
result = _find_managed_binary()
assert result is None
def test_binary_source_env_var(monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setenv("PBI_MCP_BINARY", "/some/path")
result = _binary_source(Path("/some/path"))
assert "environment variable" in result
def test_binary_source_managed() -> None:
with patch.dict(os.environ, {}, clear=False):
if "PBI_MCP_BINARY" in os.environ:
del os.environ["PBI_MCP_BINARY"]
result = _binary_source(Path("/home/user/.pbi-cli/bin/0.4.0/binary"))
assert "managed" in result
def test_binary_source_vscode() -> None:
with patch.dict(os.environ, {}, clear=False):
if "PBI_MCP_BINARY" in os.environ:
del os.environ["PBI_MCP_BINARY"]
result = _binary_source(Path("/home/user/.vscode/extensions/ext/server/binary"))
assert "VS Code" in result
def test_get_binary_info_not_found(tmp_config: Path, monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.delenv("PBI_MCP_BINARY", raising=False)
with patch("pbi_cli.core.binary_manager.find_vscode_extension_binary", return_value=None):
info = get_binary_info()
assert info["binary_path"] == "not found"
assert info["version"] == "none"

@@ -0,0 +1,82 @@
"""Tests for connection commands."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_connect_success(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["connect", "-d", "localhost:54321"])
assert result.exit_code == 0
assert len(patch_get_client.calls) == 1
assert patch_get_client.calls[0][0] == "connection_operations"
def test_connect_json_output(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "connect", "-d", "localhost:54321"])
assert result.exit_code == 0
assert "connected" in result.output
def test_connect_fabric(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["connect-fabric", "-w", "My Workspace", "-m", "My Model"]
)
assert result.exit_code == 0
assert patch_get_client.calls[0][1]["operation"] == "ConnectFabric"
def test_disconnect(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
# First connect, then disconnect
cli_runner.invoke(cli, ["connect", "-d", "localhost:54321"])
result = cli_runner.invoke(cli, ["disconnect"])
assert result.exit_code == 0
def test_disconnect_no_active_connection(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["disconnect"])
assert result.exit_code != 0
def test_connections_list_empty(
cli_runner: CliRunner,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["connections", "list"])
assert result.exit_code == 0
def test_connections_list_json(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
cli_runner.invoke(cli, ["connect", "-d", "localhost:54321"])
result = cli_runner.invoke(cli, ["--json", "connections", "list"])
assert result.exit_code == 0
assert "localhost" in result.output

@@ -0,0 +1,65 @@
"""Tests for DAX commands."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_dax_execute(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "dax", "execute", "EVALUATE Sales"])
assert result.exit_code == 0
assert "42" in result.output
def test_dax_execute_from_file(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
tmp_path: Path,
) -> None:
query_file = tmp_path / "query.dax"
query_file.write_text("EVALUATE Sales", encoding="utf-8")
result = cli_runner.invoke(
cli, ["--json", "dax", "execute", "--file", str(query_file)]
)
assert result.exit_code == 0
def test_dax_execute_no_query(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["dax", "execute"])
assert result.exit_code != 0
def test_dax_validate(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["--json", "dax", "validate", "EVALUATE Sales"]
)
assert result.exit_code == 0
assert "isValid" in result.output
def test_dax_clear_cache(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "dax", "clear-cache"])
assert result.exit_code == 0

@@ -0,0 +1,104 @@
"""Tests for measure commands."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_measure_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "measure", "list"])
assert result.exit_code == 0
assert "Total Sales" in result.output
def test_measure_list_with_table_filter(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["--json", "measure", "list", "--table", "Sales"]
)
assert result.exit_code == 0
def test_measure_get(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["--json", "measure", "get", "Total Sales", "--table", "Sales"]
)
assert result.exit_code == 0
assert "Total Sales" in result.output
def test_measure_create(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, [
"--json", "measure", "create", "Revenue",
"-e", "SUM(Sales[Revenue])",
"-t", "Sales",
])
assert result.exit_code == 0
def test_measure_update(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, [
"--json", "measure", "update", "Revenue",
"-t", "Sales",
"-e", "SUM(Sales[Amount])",
])
assert result.exit_code == 0
def test_measure_delete(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, [
"--json", "measure", "delete", "Revenue", "-t", "Sales",
])
assert result.exit_code == 0
def test_measure_rename(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, [
"--json", "measure", "rename", "OldName", "NewName", "-t", "Sales",
])
assert result.exit_code == 0
def test_measure_move(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, [
"--json", "measure", "move", "Revenue",
"-t", "Sales",
"--to-table", "Finance",
])
assert result.exit_code == 0

@@ -0,0 +1,151 @@
"""Tests for remaining command groups to boost coverage."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_column_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["--json", "column", "list", "--table", "Sales"]
)
assert result.exit_code == 0
def test_relationship_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "relationship", "list"])
assert result.exit_code == 0
def test_database_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "database", "list"])
assert result.exit_code == 0
def test_security_role_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "security-role", "list"])
assert result.exit_code == 0
def test_calc_group_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "calc-group", "list"])
assert result.exit_code == 0
def test_partition_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["--json", "partition", "list", "--table", "Sales"]
)
assert result.exit_code == 0
def test_perspective_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "perspective", "list"])
assert result.exit_code == 0
def test_hierarchy_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli, ["--json", "hierarchy", "list", "--table", "Date"]
)
assert result.exit_code == 0
def test_expression_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "expression", "list"])
assert result.exit_code == 0
def test_calendar_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "calendar", "list"])
assert result.exit_code == 0
def test_trace_start(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "trace", "start"])
assert result.exit_code == 0
def test_transaction_begin(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "transaction", "begin"])
assert result.exit_code == 0
def test_transaction_commit(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "transaction", "commit"])
assert result.exit_code == 0
def test_transaction_rollback(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "transaction", "rollback"])
assert result.exit_code == 0
def test_advanced_culture_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "advanced", "culture", "list"])
assert result.exit_code == 0

@@ -0,0 +1,38 @@
"""Tests for model commands."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_model_get(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "model", "get"])
assert result.exit_code == 0
assert "My Model" in result.output
def test_model_stats(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "model", "stats"])
assert result.exit_code == 0
def test_model_refresh(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "model", "refresh"])
assert result.exit_code == 0


@@ -0,0 +1,95 @@
"""Tests for REPL functionality (non-interactive parts)."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from pbi_cli.utils.repl import PbiRepl
def test_repl_command_exists(cli_runner: CliRunner) -> None:
result = cli_runner.invoke(cli, ["repl", "--help"])
assert result.exit_code == 0
assert "interactive REPL" in result.output
def test_repl_build_completer() -> None:
repl = PbiRepl()
completer = repl._build_completer()
# Should contain known commands
assert "measure" in completer.words
assert "dax" in completer.words
assert "connect" in completer.words
assert "repl" in completer.words
def test_repl_get_prompt_no_connection(tmp_connections: Path) -> None:
repl = PbiRepl()
prompt = repl._get_prompt()
assert prompt == "pbi> "
def test_repl_get_prompt_with_connection(tmp_connections: Path) -> None:
from pbi_cli.core.connection_store import (
ConnectionInfo,
ConnectionStore,
add_connection,
save_connections,
)
store = add_connection(
ConnectionStore(),
ConnectionInfo(name="test-conn", data_source="localhost"),
)
save_connections(store)
repl = PbiRepl()
prompt = repl._get_prompt()
assert prompt == "pbi(test-conn)> "
def test_repl_execute_line_empty() -> None:
repl = PbiRepl()
# Should not raise
repl._execute_line("")
repl._execute_line(" ")
def test_repl_execute_line_exit() -> None:
repl = PbiRepl()
import pytest
with pytest.raises(EOFError):
repl._execute_line("exit")
def test_repl_execute_line_quit() -> None:
repl = PbiRepl()
import pytest
with pytest.raises(EOFError):
repl._execute_line("quit")
def test_repl_execute_line_strips_pbi_prefix(
monkeypatch: "pytest.MonkeyPatch",
tmp_connections: Path,
) -> None:
import pytest
from tests.conftest import MockPbiMcpClient
mock = MockPbiMcpClient()
factory = lambda repl_mode=False: mock # noqa: E731
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", factory)
repl = PbiRepl(json_output=True)
# "pbi measure list" should work like "measure list"
repl._execute_line("pbi --json measure list")
def test_repl_execute_line_help() -> None:
repl = PbiRepl()
# --help should not crash the REPL (Click raises SystemExit)
repl._execute_line("--help")


@@ -0,0 +1,42 @@
"""Tests for setup command."""
from __future__ import annotations
from pathlib import Path
from unittest.mock import patch
from click.testing import CliRunner
from pbi_cli.main import cli
def test_setup_info(cli_runner: CliRunner, tmp_config: Path) -> None:
fake_info = {
"binary_path": "/test/binary",
"version": "0.4.0",
"platform": "win32-x64",
"source": "managed",
}
with patch("pbi_cli.commands.setup_cmd.get_binary_info", return_value=fake_info):
result = cli_runner.invoke(cli, ["--json", "setup", "--info"])
assert result.exit_code == 0
assert "0.4.0" in result.output
def test_setup_check(cli_runner: CliRunner, tmp_config: Path) -> None:
with patch(
"pbi_cli.commands.setup_cmd.check_for_updates",
return_value=("0.3.0", "0.4.0", True),
):
result = cli_runner.invoke(cli, ["--json", "setup", "--check"])
assert result.exit_code == 0
assert "0.4.0" in result.output
def test_setup_check_up_to_date(cli_runner: CliRunner, tmp_config: Path) -> None:
with patch(
"pbi_cli.commands.setup_cmd.check_for_updates",
return_value=("0.4.0", "0.4.0", False),
):
result = cli_runner.invoke(cli, ["setup", "--check"])
assert result.exit_code == 0


@@ -0,0 +1,58 @@
"""Tests for table commands."""
from __future__ import annotations
from pathlib import Path
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_table_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "list"])
assert result.exit_code == 0
assert "Sales" in result.output
def test_table_get(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "get", "Sales"])
assert result.exit_code == 0
def test_table_create(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, [
"--json", "table", "create", "NewTable", "--mode", "Import",
])
assert result.exit_code == 0
def test_table_delete(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "delete", "OldTable"])
assert result.exit_code == 0
def test_table_refresh(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "refresh", "Sales"])
assert result.exit_code == 0

tests/test_config.py (new file)
@@ -0,0 +1,63 @@
"""Tests for pbi_cli.core.config."""
from __future__ import annotations
import json
from pathlib import Path
from pbi_cli.core.config import PbiConfig, load_config, save_config
def test_default_config() -> None:
config = PbiConfig()
assert config.binary_version == ""
assert config.binary_path == ""
assert config.default_connection == ""
assert config.binary_args == ["--start", "--skipconfirmation"]
def test_with_updates_returns_new_instance() -> None:
original = PbiConfig(binary_version="1.0")
updated = original.with_updates(binary_version="2.0")
assert updated.binary_version == "2.0"
assert original.binary_version == "1.0" # unchanged
def test_with_updates_preserves_other_fields() -> None:
original = PbiConfig(binary_version="1.0", binary_path="/bin/test")
updated = original.with_updates(binary_version="2.0")
assert updated.binary_path == "/bin/test"
def test_load_config_missing_file(tmp_config: Path) -> None:
config = load_config()
assert config.binary_version == ""
assert config.binary_args == ["--start", "--skipconfirmation"]
def test_save_and_load_roundtrip(tmp_config: Path) -> None:
original = PbiConfig(binary_version="0.4.0", binary_path="/test/path")
save_config(original)
loaded = load_config()
assert loaded.binary_version == "0.4.0"
assert loaded.binary_path == "/test/path"
def test_load_config_corrupt_json(tmp_config: Path) -> None:
config_file = tmp_config / "config.json"
config_file.write_text("not valid json{{{", encoding="utf-8")
config = load_config()
assert config.binary_version == "" # falls back to defaults
def test_config_is_frozen() -> None:
config = PbiConfig()
try:
config.binary_version = "new" # type: ignore[misc]
assert False, "Should have raised"
except AttributeError:
pass


@@ -0,0 +1,106 @@
"""Tests for pbi_cli.core.connection_store."""
from __future__ import annotations
from pathlib import Path
from pbi_cli.core.connection_store import (
ConnectionInfo,
ConnectionStore,
add_connection,
get_active_connection,
load_connections,
remove_connection,
save_connections,
)
def test_empty_store() -> None:
store = ConnectionStore()
assert store.last_used == ""
assert store.connections == {}
def test_add_connection_returns_new_store() -> None:
store = ConnectionStore()
info = ConnectionInfo(name="test", data_source="localhost:1234")
new_store = add_connection(store, info)
assert "test" in new_store.connections
assert new_store.last_used == "test"
# Original is unchanged
assert store.connections == {}
def test_remove_connection() -> None:
info = ConnectionInfo(name="test", data_source="localhost:1234")
store = ConnectionStore(last_used="test", connections={"test": info})
new_store = remove_connection(store, "test")
assert "test" not in new_store.connections
assert new_store.last_used == ""
def test_remove_connection_clears_last_used_only_if_matching() -> None:
info1 = ConnectionInfo(name="a", data_source="host1")
info2 = ConnectionInfo(name="b", data_source="host2")
store = ConnectionStore(
last_used="a",
connections={"a": info1, "b": info2},
)
new_store = remove_connection(store, "b")
assert new_store.last_used == "a" # unchanged
assert "b" not in new_store.connections
def test_get_active_connection_with_override() -> None:
info = ConnectionInfo(name="test", data_source="localhost")
store = ConnectionStore(connections={"test": info})
result = get_active_connection(store, override="test")
assert result is not None
assert result.name == "test"
def test_get_active_connection_uses_last_used() -> None:
info = ConnectionInfo(name="test", data_source="localhost")
store = ConnectionStore(last_used="test", connections={"test": info})
result = get_active_connection(store)
assert result is not None
assert result.name == "test"
def test_get_active_connection_returns_none() -> None:
store = ConnectionStore()
assert get_active_connection(store) is None
def test_save_and_load_roundtrip(tmp_connections: Path) -> None:
info = ConnectionInfo(
name="my-conn",
data_source="localhost:54321",
initial_catalog="Sales",
)
store = add_connection(ConnectionStore(), info)
save_connections(store)
loaded = load_connections()
assert loaded.last_used == "my-conn"
assert "my-conn" in loaded.connections
assert loaded.connections["my-conn"].data_source == "localhost:54321"
def test_load_connections_missing_file(tmp_connections: Path) -> None:
store = load_connections()
assert store.connections == {}
def test_connection_info_is_frozen() -> None:
info = ConnectionInfo(name="test", data_source="localhost")
try:
info.name = "changed" # type: ignore[misc]
assert False, "Should have raised"
except AttributeError:
pass

tests/test_e2e.py (new file)
@@ -0,0 +1,50 @@
"""End-to-end tests requiring the real Power BI MCP binary.
These tests are skipped in CI unless a binary is available.
Run with: pytest -m e2e
"""
from __future__ import annotations
import subprocess
import sys
import pytest
pytestmark = pytest.mark.e2e
def _pbi(*args: str) -> subprocess.CompletedProcess[str]:
"""Run a pbi command via subprocess."""
return subprocess.run(
[sys.executable, "-m", "pbi_cli", *args],
capture_output=True,
text=True,
timeout=30,
)
@pytest.fixture(autouse=True)
def _skip_if_no_binary() -> None:
"""Skip all e2e tests if the binary is not available."""
result = _pbi("--json", "setup", "--info")
if "not found" in result.stdout:
pytest.skip("Power BI MCP binary not available")
def test_version() -> None:
result = _pbi("--version")
assert result.returncode == 0
assert "pbi-cli" in result.stdout
def test_help() -> None:
result = _pbi("--help")
assert result.returncode == 0
assert "pbi-cli" in result.stdout
def test_setup_info() -> None:
result = _pbi("--json", "setup", "--info")
assert result.returncode == 0

tests/test_errors.py (new file)
@@ -0,0 +1,36 @@
"""Tests for pbi_cli.core.errors."""
from __future__ import annotations
import click
from pbi_cli.core.errors import (
BinaryNotFoundError,
ConnectionRequiredError,
McpToolError,
PbiCliError,
)
def test_pbi_cli_error_is_click_exception() -> None:
err = PbiCliError("test message")
assert isinstance(err, click.ClickException)
assert err.format_message() == "test message"
def test_binary_not_found_default_message() -> None:
err = BinaryNotFoundError()
assert "pbi setup" in err.format_message()
def test_connection_required_default_message() -> None:
err = ConnectionRequiredError()
assert "pbi connect" in err.format_message()
def test_mcp_tool_error_includes_tool_name() -> None:
err = McpToolError("measure_operations", "not found")
assert "measure_operations" in err.format_message()
assert "not found" in err.format_message()
assert err.tool_name == "measure_operations"
assert err.detail == "not found"

tests/test_helpers.py (new file)
@@ -0,0 +1,94 @@
"""Tests for pbi_cli.commands._helpers."""
from __future__ import annotations
from pathlib import Path
import pytest
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.core.errors import McpToolError
from pbi_cli.main import PbiContext
from tests.conftest import MockPbiMcpClient
def test_build_definition_required_only() -> None:
result = build_definition(
required={"name": "Sales"},
optional={},
)
assert result == {"name": "Sales"}
def test_build_definition_filters_none() -> None:
result = build_definition(
required={"name": "Sales"},
optional={"description": None, "folder": "Finance"},
)
assert result == {"name": "Sales", "folder": "Finance"}
assert "description" not in result
def test_build_definition_preserves_falsy_non_none() -> None:
result = build_definition(
required={"name": "Sales"},
optional={"hidden": False, "count": 0, "label": ""},
)
assert result["hidden"] is False
assert result["count"] == 0
assert result["label"] == ""
def test_run_tool_adds_connection(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
ctx = PbiContext(json_output=True, connection="my-conn")
run_tool(ctx, "measure_operations", {"operation": "List"})
assert mock.calls[0][1]["connectionName"] == "my-conn"
def test_run_tool_no_connection(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
ctx = PbiContext(json_output=True)
run_tool(ctx, "measure_operations", {"operation": "List"})
assert "connectionName" not in mock.calls[0][1]
def test_run_tool_stops_client_in_oneshot(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
ctx = PbiContext(json_output=True, repl_mode=False)
run_tool(ctx, "measure_operations", {"operation": "List"})
assert mock.stopped is True
def test_run_tool_keeps_client_in_repl(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
ctx = PbiContext(json_output=True, repl_mode=True)
run_tool(ctx, "measure_operations", {"operation": "List"})
assert mock.stopped is False
def test_run_tool_raises_mcp_tool_error_on_failure(
monkeypatch: pytest.MonkeyPatch,
) -> None:
class FailingClient(MockPbiMcpClient):
def call_tool(self, tool_name: str, request: dict) -> None:
raise RuntimeError("server crashed")
mock = FailingClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
ctx = PbiContext(json_output=True)
with pytest.raises(McpToolError):
run_tool(ctx, "measure_operations", {"operation": "List"})

tests/test_mcp_client.py (new file)
@@ -0,0 +1,86 @@
"""Tests for pbi_cli.core.mcp_client (unit-level, no real server)."""
from __future__ import annotations
from pbi_cli.core.mcp_client import (
_extract_text,
_parse_content,
get_client,
get_shared_client,
)
# ---------------------------------------------------------------------------
# _parse_content tests
# ---------------------------------------------------------------------------
class FakeTextContent:
"""Mimics mcp TextContent blocks."""
def __init__(self, text: str) -> None:
self.text = text
def test_parse_content_single_json() -> None:
blocks = [FakeTextContent('{"name": "Sales"}')]
result = _parse_content(blocks)
assert result == {"name": "Sales"}
def test_parse_content_single_plain_text() -> None:
blocks = [FakeTextContent("just a string")]
result = _parse_content(blocks)
assert result == "just a string"
def test_parse_content_multiple_blocks() -> None:
blocks = [FakeTextContent("hello"), FakeTextContent(" world")]
result = _parse_content(blocks)
assert result == "hello\n world"
def test_parse_content_non_list() -> None:
result = _parse_content("raw value")
assert result == "raw value"
def test_parse_content_json_array() -> None:
blocks = [FakeTextContent('[{"a": 1}]')]
result = _parse_content(blocks)
assert result == [{"a": 1}]
# ---------------------------------------------------------------------------
# _extract_text tests
# ---------------------------------------------------------------------------
def test_extract_text_from_blocks() -> None:
blocks = [FakeTextContent("error occurred")]
result = _extract_text(blocks)
assert result == "error occurred"
def test_extract_text_non_list() -> None:
result = _extract_text("plain error")
assert result == "plain error"
# ---------------------------------------------------------------------------
# get_client / get_shared_client tests
# ---------------------------------------------------------------------------
def test_get_client_oneshot_returns_fresh() -> None:
c1 = get_client(repl_mode=False)
c2 = get_client(repl_mode=False)
assert c1 is not c2
def test_get_shared_client_returns_same_instance() -> None:
c1 = get_shared_client()
c2 = get_shared_client()
assert c1 is c2

tests/test_output.py (new file)
@@ -0,0 +1,73 @@
"""Tests for pbi_cli.core.output."""
from __future__ import annotations
import json
from pbi_cli.core.output import format_mcp_result, print_json
def test_print_json_outputs_valid_json(capsys: object) -> None:
import sys
from io import StringIO
old_stdout = sys.stdout
sys.stdout = buf = StringIO()
try:
print_json({"key": "value"})
finally:
sys.stdout = old_stdout
parsed = json.loads(buf.getvalue())
assert parsed == {"key": "value"}
def test_print_json_handles_non_serializable(capsys: object) -> None:
import sys
from io import StringIO
from pathlib import Path
old_stdout = sys.stdout
sys.stdout = buf = StringIO()
try:
print_json({"path": Path("/tmp")})
finally:
sys.stdout = old_stdout
parsed = json.loads(buf.getvalue())
assert "tmp" in parsed["path"]
def test_format_mcp_result_json_mode(capsys: object) -> None:
import sys
from io import StringIO
old_stdout = sys.stdout
sys.stdout = buf = StringIO()
try:
format_mcp_result({"name": "Sales"}, json_output=True)
finally:
sys.stdout = old_stdout
parsed = json.loads(buf.getvalue())
assert parsed["name"] == "Sales"
def test_format_mcp_result_empty_list() -> None:
# Should not raise; prints "No results." to stderr
format_mcp_result([], json_output=False)
def test_format_mcp_result_dict() -> None:
# Should not raise; prints key-value panel
format_mcp_result({"name": "Test"}, json_output=False)
def test_format_mcp_result_list_of_dicts() -> None:
# Should not raise; prints table
format_mcp_result([{"name": "A"}, {"name": "B"}], json_output=False)
def test_format_mcp_result_string() -> None:
# Should not raise; prints string
format_mcp_result("some text", json_output=False)

tests/test_platform.py (new file)
@@ -0,0 +1,91 @@
"""Tests for pbi_cli.utils.platform."""
from __future__ import annotations
from pathlib import Path
from unittest.mock import patch
import pytest
from pbi_cli.utils.platform import (
binary_name,
detect_platform,
ensure_executable,
find_vscode_extension_binary,
)
def test_detect_platform_windows() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Windows"), \
patch("pbi_cli.utils.platform.platform.machine", return_value="AMD64"):
assert detect_platform() == "win32-x64"
def test_detect_platform_macos_arm() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Darwin"), \
patch("pbi_cli.utils.platform.platform.machine", return_value="arm64"):
assert detect_platform() == "darwin-arm64"
def test_detect_platform_linux_x64() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Linux"), \
patch("pbi_cli.utils.platform.platform.machine", return_value="x86_64"):
assert detect_platform() == "linux-x64"
def test_detect_platform_unsupported() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="FreeBSD"), \
patch("pbi_cli.utils.platform.platform.machine", return_value="sparc"):
with pytest.raises(ValueError, match="Unsupported platform"):
detect_platform()
def test_binary_name_windows() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Windows"):
assert binary_name() == "powerbi-modeling-mcp.exe"
def test_binary_name_unix() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Linux"):
assert binary_name() == "powerbi-modeling-mcp"
def test_binary_name_unsupported() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="FreeBSD"):
with pytest.raises(ValueError, match="Unsupported OS"):
binary_name()
def test_ensure_executable_noop_on_windows(tmp_path: Path) -> None:
f = tmp_path / "test.exe"
f.write_text("fake", encoding="utf-8")
with patch("pbi_cli.utils.platform.platform.system", return_value="Windows"):
ensure_executable(f) # should be a no-op
def test_find_vscode_extension_binary_no_dir(tmp_path: Path) -> None:
with patch("pbi_cli.utils.platform.Path.home", return_value=tmp_path):
result = find_vscode_extension_binary()
assert result is None
def test_find_vscode_extension_binary_no_match(tmp_path: Path) -> None:
ext_dir = tmp_path / ".vscode" / "extensions"
ext_dir.mkdir(parents=True)
with patch("pbi_cli.utils.platform.Path.home", return_value=tmp_path):
result = find_vscode_extension_binary()
assert result is None
def test_find_vscode_extension_binary_found(tmp_path: Path) -> None:
ext_name = "analysis-services.powerbi-modeling-mcp-0.4.0"
server_dir = tmp_path / ".vscode" / "extensions" / ext_name / "server"
server_dir.mkdir(parents=True)
fake_bin = server_dir / "powerbi-modeling-mcp.exe"
fake_bin.write_text("fake", encoding="utf-8")
with patch("pbi_cli.utils.platform.Path.home", return_value=tmp_path), \
patch("pbi_cli.utils.platform.binary_name", return_value="powerbi-modeling-mcp.exe"):
result = find_vscode_extension_binary()
assert result is not None
assert result.name == "powerbi-modeling-mcp.exe"
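The command and helper tests above all rely on `MockPbiMcpClient` from `tests/conftest.py`, which is not part of this diff. A minimal sketch of such a fixture class, inferred from how `tests/test_helpers.py` uses it (a `calls` list of `(tool_name, request)` tuples, a `stopped` flag, and a `call_tool` method returning a canned response) — the actual conftest implementation may differ:

```python
# Hypothetical sketch of the MockPbiMcpClient referenced by these tests.
# The real class lives in tests/conftest.py (not shown in this diff); the
# interface below is inferred from the assertions in tests/test_helpers.py.
class MockPbiMcpClient:
    def __init__(self) -> None:
        # Recorded (tool_name, request) pairs, inspected by the tests
        # (e.g. to check that "connectionName" was injected).
        self.calls: list[tuple[str, dict]] = []
        self.stopped = False

    def call_tool(self, tool_name: str, request: dict) -> dict:
        # Record the call and return a canned payload instead of talking
        # to a real Power BI MCP server.
        self.calls.append((tool_name, request))
        return {"status": "ok"}

    def stop(self) -> None:
        # One-shot mode stops the client; REPL mode keeps it alive.
        self.stopped = True
```

Under this shape, `run_tool` can assert on `mock.calls[0][1]` after invoking a tool and on `mock.stopped` to verify one-shot versus REPL lifecycle behavior.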