feat: v2.0.0 - replace MCP server with direct pythonnet/.NET TOM interop

Remove the MCP server binary dependency entirely. All 22 command groups now
connect directly to Power BI Desktop's Analysis Services engine via pythonnet
and bundled Microsoft.AnalysisServices DLLs (~20MB, in-process).

- Direct .NET TOM/ADOMD.NET interop for sub-second command execution
- 7 Claude Code skills (added Diagnostics and Partitions & Expressions)
- New commands: trace, transaction, calendar, expression, partition, advanced culture
- 91 tests passing, all skills updated, README/CHANGELOG rewritten
MinaSaad1 2026-03-27 07:19:21 +02:00
parent d9951aeecc
commit b777adec55
69 changed files with 3232 additions and 2399 deletions

CHANGELOG.md

@@ -5,6 +5,35 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.0.0] - 2026-03-27
### Breaking
- Removed MCP server dependency entirely (no more `powerbi-modeling-mcp` binary)
- Removed `connect-fabric` command (future work)
- Removed per-object TMDL export (`table export-tmdl`, `measure export-tmdl`, etc.) -- use `pbi database export-tmdl`
- Removed `model refresh` command
- Removed `security-role export-tmdl` -- use `pbi database export-tmdl`
### Added
- Direct pythonnet/.NET TOM interop (in-process, sub-second commands)
- Bundled Microsoft Analysis Services DLLs (~20MB, no external download needed)
- 2 new Claude Code skills: Diagnostics and Partitions & Expressions (7 total)
- New commands: `trace start/stop/fetch/export`, `transaction begin/commit/rollback`, `calendar list/mark`, `expression list/get/create/delete`, `partition list/create/delete/refresh`, `advanced culture list/get`
- `connections last` command to show last-used connection
- `pbi connect` now auto-installs skills (no separate `pbi skills install` needed)
### Changed
- `pbi setup` now verifies pythonnet + bundled DLLs (no longer downloads a binary)
- Architecture: Click CLI -> tom_backend/adomd_backend -> pythonnet -> .NET TOM (in-process)
- All 7 skills updated to reflect v2 commands and architecture
- README rewritten for v2 architecture
### Removed
- MCP client/server architecture
- Binary manager and auto-download from VS Code Marketplace
- `$PBI_MCP_BINARY` environment variable
- `~/.pbi-cli/bin/` binary directory
## [1.0.6] - 2026-03-26
### Fixed

README.md

@@ -13,13 +13,13 @@ Install once, then just ask Claude to work with your semantic models.
## What is this?
pbi-cli gives **Claude Code** (and other AI agents) the ability to manage Power BI semantic models. It ships with 5 skills that Claude discovers automatically. You ask in plain English, Claude uses the right `pbi` commands.
pbi-cli gives **Claude Code** (and other AI agents) the ability to manage Power BI semantic models. It ships with 7 skills that Claude discovers automatically. You ask in plain English, Claude uses the right `pbi` commands.
```mermaid
graph LR
A["<b>You</b><br/>'Add a YTD measure<br/>to the Sales table'"] --> B["<b>Claude Code</b><br/>Uses Power BI skills"]
B --> C["<b>pbi-cli</b>"]
C --> D["<b>Power BI</b><br/>Desktop / Fabric"]
C --> D["<b>Power BI</b><br/>Desktop"]
style A fill:#1a1a2e,stroke:#f2c811,color:#fff
style B fill:#16213e,stroke:#4cc9f0,color:#fff
@@ -41,14 +41,14 @@ Install and set up pbi-cli from https://github.com/MinaSaad1/pbi-cli.git
```bash
pipx install pbi-cli-tool # 1. Install (handles PATH automatically)
pbi connect # 2. Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # 2. Auto-detects Power BI Desktop and installs skills
```
That's it. Open Power BI Desktop with a `.pbix` file, run `pbi connect`, and everything is set up automatically. Open Claude Code and start asking.
You can also specify the port manually: `pbi connect -d localhost:54321`
> **Requires:** Python 3.10+ and Power BI Desktop (local) or a Fabric workspace (cloud).
> **Requires:** Windows with Python 3.10+ and Power BI Desktop running.
<details>
<summary><b>Using pip instead of pipx?</b></summary>
@@ -79,7 +79,7 @@ Then **restart your terminal**. We recommend `pipx` instead to avoid this entire
## Skills
After running `pbi skills install`, Claude Code discovers **5 Power BI skills**. Each skill teaches Claude a different area of Power BI development. You don't need to memorize commands. Just describe what you want.
After running `pbi connect`, Claude Code discovers **7 Power BI skills**. Each skill teaches Claude a different area of Power BI development. You don't need to memorize commands. Just describe what you want.
```mermaid
graph TD
@@ -90,6 +90,8 @@ graph TD
SK --> S3["Deployment"]
SK --> S4["Security"]
SK --> S5["Documentation"]
SK --> S6["Diagnostics"]
SK --> S7["Partitions"]
style YOU fill:#1a1a2e,stroke:#f2c811,color:#fff
style CC fill:#16213e,stroke:#4cc9f0,color:#fff
@@ -99,6 +101,8 @@ graph TD
style S3 fill:#1a1a2e,stroke:#7b61ff,color:#fff
style S4 fill:#1a1a2e,stroke:#06d6a0,color:#fff
style S5 fill:#1a1a2e,stroke:#ff6b6b,color:#fff
style S6 fill:#1a1a2e,stroke:#ffd166,color:#fff
style S7 fill:#1a1a2e,stroke:#a0c4ff,color:#fff
```
### Modeling
@@ -144,9 +148,9 @@ TOPN(
### Deployment
> *"Export the model to Git and deploy it to the Staging workspace"*
> *"Export the model to Git for version control"*
Claude exports your model as TMDL files for version control, then imports them into another environment. Handles transactions for safe multi-step changes.
Claude exports your model as TMDL files for version control and imports them back. Handles transactions for safe multi-step changes.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
@@ -154,9 +158,7 @@ Claude exports your model as TMDL files for version control, then imports them i
```bash
pbi database export-tmdl ./model/
# ... you commit to git ...
pbi connect-fabric --workspace "Staging" --model "Sales Model"
pbi database import-tmdl ./model/
pbi model refresh --type Full
```
</details>
@@ -164,7 +166,7 @@ pbi model refresh --type Full
> *"Set up row-level security so regional managers only see their region"*
Claude creates RLS roles with descriptions, sets up perspectives for different user groups, and exports role definitions for version control.
Claude creates RLS roles with descriptions, sets up perspectives for different user groups, and exports the model for version control.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
@@ -173,7 +175,7 @@ Claude creates RLS roles with descriptions, sets up perspectives for different u
pbi security-role create "Regional Manager" --description "Users see only their region's data"
pbi perspective create "Executive Dashboard"
pbi perspective create "Regional Detail"
pbi security-role export-tmdl "Regional Manager"
pbi database export-tmdl ./model-backup/
```
</details>
@@ -196,20 +198,56 @@ pbi database export-tmdl ./model-docs/
```
</details>
### Diagnostics
> *"Why is this DAX query so slow?"*
Claude traces query execution, clears caches for clean benchmarks, checks model health, and verifies the environment.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
```bash
pbi dax clear-cache
pbi trace start
pbi dax execute "EVALUATE SUMMARIZECOLUMNS(...)" --timeout 300
pbi trace stop
pbi trace export ./trace.json
```
</details>
### Partitions & Expressions
> *"Set up partitions for incremental refresh on the Sales table"*
Claude manages table partitions, shared M/Power Query expressions, and calendar table configuration.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
```bash
pbi partition list --table Sales
pbi partition create "Sales_2024" --table Sales --expression "..." --mode Import
pbi expression create "ServerURL" --expression '"https://api.example.com"'
pbi calendar mark Calendar --date-column Date
```
</details>
---
## All Commands
22 command groups covering every Power BI MCP server operation. You rarely need these directly when using Claude Code, but they're available for scripting, CI/CD, or manual use.
22 command groups covering the full Power BI Tabular Object Model. You rarely need these directly when using Claude Code, but they're available for scripting, CI/CD, or manual use.
| Category | Commands |
|----------|----------|
| **Queries** | `dax execute`, `dax validate`, `dax clear-cache` |
| **Model** | `table`, `column`, `measure`, `relationship`, `hierarchy`, `calc-group` |
| **Deploy** | `database export-tmdl`, `database import-tmdl`, `database export-tmsl`, `model refresh`, `transaction` |
| **Deploy** | `database export-tmdl`, `database import-tmdl`, `database export-tmsl`, `transaction` |
| **Security** | `security-role`, `perspective` |
| **Connect** | `connect`, `connect-fabric`, `disconnect`, `connections list` |
| **Other** | `partition`, `expression`, `calendar`, `trace`, `advanced`, `model get`, `model stats` |
| **Connect** | `connect`, `disconnect`, `connections list`, `connections last` |
| **Data** | `partition`, `expression`, `calendar`, `advanced culture` |
| **Diagnostics** | `trace start`, `trace stop`, `trace fetch`, `trace export`, `model stats` |
| **Tools** | `setup`, `repl`, `skills install`, `skills list` |
Use `--json` for machine-readable output (for scripts and AI agents):
@@ -225,7 +263,7 @@ Run `pbi <command> --help` for full options.
## REPL Mode
For interactive work, the REPL keeps the MCP server running between commands (skipping the ~2-3s startup each time):
For interactive work, the REPL keeps a persistent connection alive between commands:
```
$ pbi repl
@@ -244,22 +282,23 @@ Tab completion, command history, and a dynamic prompt showing your active connec
## How It Works
pbi-cli wraps Microsoft's official Power BI MCP server binary behind a CLI. The binary is downloaded automatically on first `pbi connect` from the VS Code Marketplace.
pbi-cli connects directly to Power BI Desktop's Analysis Services engine via pythonnet and the .NET Tabular Object Model (TOM). No external binaries or MCP servers needed. Everything runs in-process for sub-second command execution.
```mermaid
graph TB
subgraph CLI["pbi-cli"]
A["CLI commands"] --> B["MCP client"]
subgraph CLI["pbi-cli (Python)"]
A["Click CLI"] --> B["tom_backend / adomd_backend"]
B --> C["pythonnet"]
end
B -->|"stdio"| C["Power BI MCP Server<br/>.NET binary"]
C -->|"XMLA"| D["Power BI Desktop<br/>or Fabric"]
C -->|"in-process .NET"| D["Bundled TOM DLLs"]
D -->|"XMLA"| E["Power BI Desktop<br/>msmdsrv.exe"]
style CLI fill:#16213e,stroke:#4cc9f0,color:#fff
style C fill:#0f3460,stroke:#7b61ff,color:#fff
style D fill:#1a1a2e,stroke:#f2c811,color:#fff
style D fill:#0f3460,stroke:#7b61ff,color:#fff
style E fill:#1a1a2e,stroke:#f2c811,color:#fff
```
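The in-process flow above can be sketched in miniature. This is an illustrative sketch only, not the repo's actual code: the function names are invented, and the pythonnet/`clr` loading step is shown only in comments so the snippet stays runnable without .NET installed.

```python
from __future__ import annotations

# Illustrative sketch of the in-process connection flow -- names are
# hypothetical; the real logic lives in the tom_backend / dotnet_loader modules.

def build_xmla_connection_string(port: int, catalog: str | None = None) -> str:
    """Build the XMLA data-source string TOM would connect with.

    Power BI Desktop's embedded engine (msmdsrv.exe) listens on a random
    local port, so a connection target looks like "localhost:57947".
    """
    parts = [f"Data Source=localhost:{port}"]
    if catalog:
        parts.append(f"Initial Catalog={catalog}")
    return ";".join(parts)


def connect_tom(port: int):
    """Sketch only -- the pythonnet part is deliberately commented out.

    # import clr                      # provided by pythonnet
    # clr.AddReference("Microsoft.AnalysisServices.Tabular")  # bundled DLL
    # from Microsoft.AnalysisServices.Tabular import Server
    # server = Server()
    # server.Connect(build_xmla_connection_string(port))
    # return server
    """
    raise NotImplementedError("requires pythonnet + the bundled DLLs")
```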
**Why a CLI wrapper?** When an AI agent uses an MCP server directly, the tool schemas consume ~4,000+ tokens per tool in the context window. A `pbi` command costs ~30 tokens. Same capabilities, 100x less context.
**Why a CLI?** When an AI agent uses an MCP server directly, the tool schemas consume ~4,000+ tokens per tool in the context window. A `pbi` command costs ~30 tokens. Same capabilities, 100x less context.
<details>
<summary><b>Configuration details</b></summary>
@@ -268,16 +307,17 @@ All config lives in `~/.pbi-cli/`:
```
~/.pbi-cli/
config.json # Binary version, path, args
config.json # Default connection preference
connections.json # Named connections
repl_history # REPL command history
bin/{version}/ # MCP server binary
```
Binary resolution order:
1. `$PBI_MCP_BINARY` env var (explicit override)
2. `~/.pbi-cli/bin/` (auto-downloaded on first connect)
3. VS Code extension fallback
Bundled DLLs ship inside the Python package (`pbi_cli/dlls/`):
- Microsoft.AnalysisServices.Tabular.dll
- Microsoft.AnalysisServices.AdomdClient.dll
- Microsoft.AnalysisServices.Core.dll
- Microsoft.AnalysisServices.Tabular.Json.dll
- Microsoft.AnalysisServices.dll
</details>
@@ -294,7 +334,7 @@ pip install -e ".[dev]"
```bash
ruff check src/ tests/ # Lint
mypy src/ # Type check
pytest -m "not e2e" # Run 120 tests
pytest -m "not e2e" # Run tests
```
---


@@ -13,11 +13,11 @@ Install once, then just ask Claude to work with your semantic models.
## What is this?
pbi-cli gives **Claude Code** (and other AI agents) the ability to manage Power BI semantic models. It ships with 5 skills that Claude discovers automatically. You ask in plain English, Claude uses the right `pbi` commands.
pbi-cli gives **Claude Code** (and other AI agents) the ability to manage Power BI semantic models. It ships with 7 skills that Claude discovers automatically. You ask in plain English, Claude uses the right `pbi` commands.
```
You Claude Code pbi-cli Power BI
"Add a YTD measure ---> Uses Power BI ---> CLI commands ---> Desktop / Fabric
"Add a YTD measure ---> Uses Power BI ---> CLI commands ---> Desktop
to the Sales table" skills
```
@@ -35,14 +35,14 @@ Install and set up pbi-cli from https://github.com/MinaSaad1/pbi-cli.git
```bash
pipx install pbi-cli-tool # 1. Install (handles PATH automatically)
pbi connect # 2. Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # 2. Auto-detects Power BI Desktop and installs skills
```
That's it. Open Power BI Desktop with a `.pbix` file, run `pbi connect`, and everything is set up automatically. Open Claude Code and start asking.
You can also specify the port manually: `pbi connect -d localhost:54321`
> **Requires:** Python 3.10+ and Power BI Desktop (local) or a Fabric workspace (cloud).
> **Requires:** Windows with Python 3.10+ and Power BI Desktop running.
<details>
<summary><b>Using pip instead of pipx?</b></summary>
@@ -73,7 +73,7 @@ Then **restart your terminal**. We recommend `pipx` instead to avoid this entire
## Skills
After running `pbi skills install`, Claude Code discovers **5 Power BI skills**. Each skill teaches Claude a different area of Power BI development. You don't need to memorize commands. Just describe what you want.
After running `pbi connect`, Claude Code discovers **7 Power BI skills**. Each skill teaches Claude a different area of Power BI development. You don't need to memorize commands. Just describe what you want.
```
You: "Set up RLS for regional managers"
@@ -86,6 +86,8 @@ Claude Code --> Picks the right skill
+-- Deployment
+-- Security
+-- Documentation
+-- Diagnostics
+-- Partitions
```
### Modeling
@@ -131,9 +133,9 @@ TOPN(
### Deployment
> *"Export the model to Git and deploy it to the Staging workspace"*
> *"Export the model to Git for version control"*
Claude exports your model as TMDL files for version control, then imports them into another environment. Handles transactions for safe multi-step changes.
Claude exports your model as TMDL files for version control and imports them back. Handles transactions for safe multi-step changes.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
@@ -141,9 +143,7 @@ Claude exports your model as TMDL files for version control, then imports them i
```bash
pbi database export-tmdl ./model/
# ... you commit to git ...
pbi connect-fabric --workspace "Staging" --model "Sales Model"
pbi database import-tmdl ./model/
pbi model refresh --type Full
```
</details>
@@ -151,7 +151,7 @@ pbi model refresh --type Full
> *"Set up row-level security so regional managers only see their region"*
Claude creates RLS roles with descriptions, sets up perspectives for different user groups, and exports role definitions for version control.
Claude creates RLS roles with descriptions, sets up perspectives for different user groups, and exports the model for version control.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
@@ -160,7 +160,7 @@ Claude creates RLS roles with descriptions, sets up perspectives for different u
pbi security-role create "Regional Manager" --description "Users see only their region's data"
pbi perspective create "Executive Dashboard"
pbi perspective create "Regional Detail"
pbi security-role export-tmdl "Regional Manager"
pbi database export-tmdl ./model-backup/
```
</details>
@@ -183,20 +183,56 @@ pbi database export-tmdl ./model-docs/
```
</details>
### Diagnostics
> *"Why is this DAX query so slow?"*
Claude traces query execution, clears caches for clean benchmarks, checks model health, and verifies the environment.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
```bash
pbi dax clear-cache
pbi trace start
pbi dax execute "EVALUATE SUMMARIZECOLUMNS(...)" --timeout 300
pbi trace stop
pbi trace export ./trace.json
```
</details>
### Partitions & Expressions
> *"Set up partitions for incremental refresh on the Sales table"*
Claude manages table partitions, shared M/Power Query expressions, and calendar table configuration.
<details>
<summary>Example: what Claude runs behind the scenes</summary>
```bash
pbi partition list --table Sales
pbi partition create "Sales_2024" --table Sales --expression "..." --mode Import
pbi expression create "ServerURL" --expression '"https://api.example.com"'
pbi calendar mark Calendar --date-column Date
```
</details>
---
## All Commands
22 command groups covering every Power BI MCP server operation. You rarely need these directly when using Claude Code, but they're available for scripting, CI/CD, or manual use.
22 command groups covering the full Power BI Tabular Object Model. You rarely need these directly when using Claude Code, but they're available for scripting, CI/CD, or manual use.
| Category | Commands |
|----------|----------|
| **Queries** | `dax execute`, `dax validate`, `dax clear-cache` |
| **Model** | `table`, `column`, `measure`, `relationship`, `hierarchy`, `calc-group` |
| **Deploy** | `database export-tmdl`, `database import-tmdl`, `database export-tmsl`, `model refresh`, `transaction` |
| **Deploy** | `database export-tmdl`, `database import-tmdl`, `database export-tmsl`, `transaction` |
| **Security** | `security-role`, `perspective` |
| **Connect** | `connect`, `connect-fabric`, `disconnect`, `connections list` |
| **Other** | `partition`, `expression`, `calendar`, `trace`, `advanced`, `model get`, `model stats` |
| **Connect** | `connect`, `disconnect`, `connections list`, `connections last` |
| **Data** | `partition`, `expression`, `calendar`, `advanced culture` |
| **Diagnostics** | `trace start`, `trace stop`, `trace fetch`, `trace export`, `model stats` |
| **Tools** | `setup`, `repl`, `skills install`, `skills list` |
Use `--json` for machine-readable output (for scripts and AI agents):
@@ -212,7 +248,7 @@ Run `pbi <command> --help` for full options.
## REPL Mode
For interactive work, the REPL keeps the MCP server running between commands (skipping the ~2-3s startup each time):
For interactive work, the REPL keeps a persistent connection alive between commands:
```
$ pbi repl
@@ -231,17 +267,17 @@ Tab completion, command history, and a dynamic prompt showing your active connec
## How It Works
pbi-cli wraps Microsoft's official Power BI MCP server binary behind a CLI. The binary is downloaded automatically on first `pbi connect` from the VS Code Marketplace.
pbi-cli connects directly to Power BI Desktop's Analysis Services engine via pythonnet and the .NET Tabular Object Model (TOM). No external binaries or MCP servers needed. Everything runs in-process for sub-second command execution.
```
+------------------+ +----------------------+ +------------------+
| pbi-cli | | Power BI MCP Server | | Power BI |
| CLI commands -->-| stdio | (.NET binary) | XMLA | Desktop/Fabric |
| MCP client |-------->| |-------->| |
+------------------+ +----------------------+ +------------------+
+------------------+ +---------------------+ +------------------+
| pbi-cli | | Bundled TOM DLLs | | Power BI |
| (Python CLI)     | pythonnet | (.NET in-process)  |  XMLA  |   msmdsrv.exe    |
| Click commands |-------->| TOM / ADOMD.NET |-------->| msmdsrv.exe |
+------------------+ +---------------------+ +------------------+
```
**Why a CLI wrapper?** When an AI agent uses an MCP server directly, the tool schemas consume ~4,000+ tokens per tool in the context window. A `pbi` command costs ~30 tokens. Same capabilities, 100x less context.
**Why a CLI?** When an AI agent uses an MCP server directly, the tool schemas consume ~4,000+ tokens per tool in the context window. A `pbi` command costs ~30 tokens. Same capabilities, 100x less context.
<details>
<summary><b>Configuration details</b></summary>
@@ -250,16 +286,17 @@ All config lives in `~/.pbi-cli/`:
```
~/.pbi-cli/
config.json # Binary version, path, args
config.json # Default connection preference
connections.json # Named connections
repl_history # REPL command history
bin/{version}/ # MCP server binary
```
Binary resolution order:
1. `$PBI_MCP_BINARY` env var (explicit override)
2. `~/.pbi-cli/bin/` (auto-downloaded on first connect)
3. VS Code extension fallback
Bundled DLLs ship inside the Python package (`pbi_cli/dlls/`):
- Microsoft.AnalysisServices.Tabular.dll
- Microsoft.AnalysisServices.AdomdClient.dll
- Microsoft.AnalysisServices.Core.dll
- Microsoft.AnalysisServices.Tabular.Json.dll
- Microsoft.AnalysisServices.dll
</details>
@@ -276,7 +313,7 @@ pip install -e ".[dev]"
```bash
ruff check src/ tests/ # Lint
mypy src/ # Type check
pytest -m "not e2e" # Run 120 tests
pytest -m "not e2e" # Run tests
```
---

pyproject.toml

@@ -4,15 +4,15 @@ build-backend = "setuptools.build_meta"
[project]
name = "pbi-cli-tool"
version = "1.0.6"
description = "CLI for Power BI semantic models - wraps the Power BI MCP server for token-efficient AI agent usage"
version = "2.0.0"
description = "CLI for Power BI semantic models - direct .NET connection for token-efficient AI agent usage"
readme = "README.pypi.md"
license = {text = "MIT"}
requires-python = ">=3.10"
authors = [
{name = "pbi-cli contributors"},
]
keywords = ["power-bi", "cli", "mcp", "semantic-model", "dax", "claude-code"]
keywords = ["power-bi", "cli", "semantic-model", "dax", "claude-code", "tom"]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
@@ -29,10 +29,10 @@ classifiers = [
]
dependencies = [
"click>=8.0.0",
"mcp>=1.20.0",
"rich>=13.0.0",
"httpx>=0.24.0",
"prompt-toolkit>=3.0.0",
"pythonnet==3.1.0rc0",
"clr-loader>=0.2.6",
]
[project.scripts]
@@ -47,7 +47,6 @@ Issues = "https://github.com/MinaSaad1/pbi-cli/issues"
dev = [
"pytest>=7.0",
"pytest-cov>=4.0",
"pytest-asyncio>=0.21",
"ruff>=0.4.0",
"mypy>=1.10",
]
@@ -57,6 +56,7 @@ where = ["src"]
[tool.setuptools.package-data]
"pbi_cli.skills" = ["**/*.md"]
"pbi_cli.dlls" = ["*.dll"]
[tool.ruff]
target-version = "py310"
@@ -65,10 +65,19 @@ line-length = 100
[tool.ruff.lint]
select = ["E", "F", "I", "N", "W", "UP"]
[tool.ruff.lint.per-file-ignores]
# .NET interop code uses CamelCase names to match the .NET API surface
"src/pbi_cli/core/adomd_backend.py" = ["N806"]
"src/pbi_cli/core/session.py" = ["N806"]
"src/pbi_cli/core/tom_backend.py" = ["N806", "N814"]
"src/pbi_cli/core/dotnet_loader.py" = ["N806", "N814"]
# Mock objects mirror .NET CamelCase API
"tests/conftest.py" = ["N802", "N806"]
[tool.pytest.ini_options]
testpaths = ["tests"]
markers = [
"e2e: end-to-end tests requiring real Power BI binary",
"e2e: end-to-end tests requiring running Power BI Desktop",
]
[tool.mypy]


@@ -1,3 +1,3 @@
"""pbi-cli: CLI for Power BI semantic models via MCP server."""
"""pbi-cli: CLI for Power BI semantic models via direct .NET interop."""
__version__ = "1.0.6"
__version__ = "2.0.0"


@@ -2,100 +2,35 @@
from __future__ import annotations
from typing import Any
from collections.abc import Callable
from typing import TYPE_CHECKING, Any
from pbi_cli.core.errors import McpToolError
from pbi_cli.core.mcp_client import PbiMcpClient, get_client
from pbi_cli.core.output import format_mcp_result, print_error
from pbi_cli.main import PbiContext
from pbi_cli.core.errors import TomError
from pbi_cli.core.output import format_result, print_error
if TYPE_CHECKING:
from pbi_cli.main import PbiContext
def resolve_connection_name(ctx: PbiContext) -> str | None:
"""Return the connection name from --connection flag or last-used store."""
if ctx.connection:
return ctx.connection
from pbi_cli.core.connection_store import load_connections
store = load_connections()
return store.last_used or None
def _auto_reconnect(client: PbiMcpClient, ctx: PbiContext) -> str | None:
"""Re-establish the saved connection on a fresh MCP server process.
Each non-REPL command starts a new MCP server, so the connection
must be re-established before running any tool that needs one.
Returns the connection name, or None if no saved connection exists.
"""
from pbi_cli.core.connection_store import (
get_active_connection,
load_connections,
)
store = load_connections()
conn = get_active_connection(store, override=ctx.connection)
if conn is None:
return None
# Build the appropriate Connect request
if conn.workspace_name:
request: dict[str, object] = {
"operation": "ConnectFabric",
"workspaceName": conn.workspace_name,
"semanticModelName": conn.semantic_model_name,
"tenantName": conn.tenant_name,
}
else:
request = {
"operation": "Connect",
"dataSource": conn.data_source,
}
if conn.initial_catalog:
request["initialCatalog"] = conn.initial_catalog
if conn.connection_string:
request["connectionString"] = conn.connection_string
result = client.call_tool("connection_operations", request)
# Use server-assigned connection name (e.g. "PBIDesktop-demo-57947")
# instead of our locally saved name (e.g. "localhost-57947")
server_name = None
if isinstance(result, dict):
server_name = result.get("connectionName") or result.get("ConnectionName")
return server_name or conn.name
def run_tool(
def run_command(
ctx: PbiContext,
tool_name: str,
request: dict[str, Any],
fn: Callable[..., Any],
**kwargs: Any,
) -> Any:
"""Execute an MCP tool call with standard error handling.
"""Execute a backend function with standard error handling.
In non-REPL mode, automatically re-establishes the saved connection
before running the tool (each invocation starts a fresh MCP server).
Formats output based on --json flag. Returns the result or exits on error.
Calls ``fn(**kwargs)`` and formats the output based on the
``--json`` flag. Returns the result or exits on error.
"""
client = get_client(repl_mode=ctx.repl_mode)
try:
# In non-REPL mode, re-establish connection on the fresh server
if not ctx.repl_mode:
conn_name = _auto_reconnect(client, ctx)
else:
conn_name = resolve_connection_name(ctx)
if conn_name:
request.setdefault("connectionName", conn_name)
result = client.call_tool(tool_name, request)
format_mcp_result(result, ctx.json_output)
result = fn(**kwargs)
format_result(result, ctx.json_output)
return result
except Exception as e:
print_error(str(e))
raise McpToolError(tool_name, str(e))
finally:
if not ctx.repl_mode:
client.stop()
raise SystemExit(1)
raise TomError(fn.__name__, str(e))
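The contract of the new `run_command` helper can be exercised in isolation. A minimal stand-in, assuming nothing about the real `PbiContext`, output formatting, or backends (every name below is invented for illustration; the real helper also prints the error and raises its own error type):

```python
from collections.abc import Callable
from typing import Any


def run_command_sketch(fn: Callable[..., Any], **kwargs: Any) -> Any:
    # Same shape as the helper in the diff: call the backend function,
    # return its result, and turn any failure into a non-zero exit.
    # (Error printing and --json formatting are omitted here.)
    try:
        return fn(**kwargs)
    except Exception as e:
        raise SystemExit(1) from e


# Invented stand-in for a tom_backend function:
def measure_list(model: list[str]) -> list[str]:
    return sorted(model)


ok = run_command_sketch(measure_list, model=["Sales", "Cost"])
```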
def build_definition(


@@ -1,16 +1,16 @@
"""Less common operations: culture, translation, function, query-group."""
"""Less common operations: culture, translation."""
from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@click.group()
def advanced() -> None:
"""Advanced operations: cultures, translations, functions, query groups."""
"""Advanced operations: cultures, translations."""
# --- Culture ---
@@ -25,7 +25,11 @@ def culture() -> None:
@pass_context
def culture_list(ctx: PbiContext) -> None:
"""List cultures."""
run_tool(ctx, "culture_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import culture_list as _culture_list
session = get_session_for_command(ctx)
run_command(ctx, _culture_list, model=session.model)
@culture.command()
@@ -33,7 +37,11 @@ def culture_list(ctx: PbiContext) -> None:
@pass_context
def culture_create(ctx: PbiContext, name: str) -> None:
"""Create a culture."""
run_tool(ctx, "culture_operations", {"operation": "Create", "definitions": [{"name": name}]})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import culture_create as _culture_create
session = get_session_for_command(ctx)
run_command(ctx, _culture_create, model=session.model, name=name)
@culture.command(name="delete")
@@ -41,110 +49,8 @@ def culture_create(ctx: PbiContext, name: str) -> None:
@pass_context
def culture_delete(ctx: PbiContext, name: str) -> None:
"""Delete a culture."""
run_tool(ctx, "culture_operations", {"operation": "Delete", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import culture_delete as _culture_delete
# --- Translation ---
@advanced.group()
def translation() -> None:
"""Manage object translations."""
@translation.command(name="list")
@click.option("--culture", "-c", required=True, help="Culture name.")
@pass_context
def translation_list(ctx: PbiContext, culture: str) -> None:
"""List translations for a culture."""
run_tool(ctx, "object_translation_operations", {"operation": "List", "cultureName": culture})
@translation.command()
@click.option("--culture", "-c", required=True, help="Culture name.")
@click.option("--object-name", required=True, help="Object to translate.")
@click.option("--table", "-t", default=None, help="Table name (if translating table object).")
@click.option("--translated-caption", default=None, help="Translated caption.")
@click.option("--translated-description", default=None, help="Translated description.")
@pass_context
def create(
ctx: PbiContext,
culture: str,
object_name: str,
table: str | None,
translated_caption: str | None,
translated_description: str | None,
) -> None:
"""Create an object translation."""
definition = build_definition(
required={"objectName": object_name, "cultureName": culture},
optional={
"tableName": table,
"translatedCaption": translated_caption,
"translatedDescription": translated_description,
},
)
run_tool(
ctx,
"object_translation_operations",
{
"operation": "Create",
"definitions": [definition],
},
)
# --- Function ---
@advanced.group()
def function() -> None:
"""Manage model functions."""
@function.command(name="list")
@pass_context
def function_list(ctx: PbiContext) -> None:
"""List functions."""
run_tool(ctx, "function_operations", {"operation": "List"})
@function.command()
@click.argument("name")
@click.option("--expression", "-e", required=True, help="Function expression.")
@pass_context
def function_create(ctx: PbiContext, name: str, expression: str) -> None:
"""Create a function."""
run_tool(
ctx,
"function_operations",
{
"operation": "Create",
"definitions": [{"name": name, "expression": expression}],
},
)
# --- Query Group ---
@advanced.group(name="query-group")
def query_group() -> None:
"""Manage query groups."""
@query_group.command(name="list")
@pass_context
def qg_list(ctx: PbiContext) -> None:
"""List query groups."""
run_tool(ctx, "query_group_operations", {"operation": "List"})
@query_group.command()
@click.argument("name")
@click.option("--folder", default=None, help="Folder path.")
@pass_context
def qg_create(ctx: PbiContext, name: str, folder: str | None) -> None:
"""Create a query group."""
definition = build_definition(required={"name": name}, optional={"folder": folder})
run_tool(ctx, "query_group_operations", {"operation": "Create", "definitions": [definition]})
session = get_session_for_command(ctx)
run_command(ctx, _culture_delete, model=session.model, name=name)


@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -17,7 +17,11 @@ def calc_group() -> None:
@pass_context
def cg_list(ctx: PbiContext) -> None:
"""List all calculation groups."""
run_tool(ctx, "calculation_group_operations", {"operation": "ListGroups"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import calc_group_list
session = get_session_for_command(ctx)
run_command(ctx, calc_group_list, model=session.model)
@calc_group.command()
@ -27,17 +31,17 @@ def cg_list(ctx: PbiContext) -> None:
@pass_context
def create(ctx: PbiContext, name: str, description: str | None, precedence: int | None) -> None:
"""Create a calculation group."""
definition = build_definition(
required={"name": name},
optional={"description": description, "calculationGroupPrecedence": precedence},
)
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import calc_group_create
session = get_session_for_command(ctx)
run_command(
ctx,
"calculation_group_operations",
{
"operation": "CreateGroup",
"definitions": [definition],
},
calc_group_create,
model=session.model,
name=name,
description=description,
precedence=precedence,
)
@ -46,7 +50,11 @@ def create(ctx: PbiContext, name: str, description: str | None, precedence: int
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a calculation group."""
run_tool(ctx, "calculation_group_operations", {"operation": "DeleteGroup", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import calc_group_delete
session = get_session_for_command(ctx)
run_command(ctx, calc_group_delete, model=session.model, name=name)
@calc_group.command(name="items")
@ -54,14 +62,11 @@ def delete(ctx: PbiContext, name: str) -> None:
@pass_context
def list_items(ctx: PbiContext, group_name: str) -> None:
"""List calculation items in a group."""
run_tool(
ctx,
"calculation_group_operations",
{
"operation": "ListItems",
"calculationGroupName": group_name,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import calc_item_list
session = get_session_for_command(ctx)
run_command(ctx, calc_item_list, model=session.model, group_name=group_name)
@calc_group.command(name="create-item")
@ -74,16 +79,16 @@ def create_item(
ctx: PbiContext, item_name: str, group: str, expression: str, ordinal: int | None
) -> None:
"""Create a calculation item in a group."""
definition = build_definition(
required={"name": item_name, "expression": expression},
optional={"ordinal": ordinal},
)
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import calc_item_create
session = get_session_for_command(ctx)
run_command(
ctx,
"calculation_group_operations",
{
"operation": "CreateItem",
"calculationGroupName": group,
"definitions": [definition],
},
calc_item_create,
model=session.model,
group_name=group,
name=item_name,
expression=expression,
ordinal=ordinal,
)
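Every migrated command above follows the same pattern: fetch the live TOM session, then hand a backend function plus keyword arguments to `run_command`. The helper itself is not part of this diff; a hedged sketch of what it presumably does (the error handling and output details are assumptions — the real helper in `pbi_cli/commands/_helpers.py` likely also honors `ctx.json_output`):

```python
from typing import Any, Callable

def run_command(ctx: Any, func: Callable[..., Any], **kwargs: Any) -> None:
    """Invoke a TOM backend function and surface its result or failure.

    Sketch only: approximates the helper the new command bodies call.
    """
    try:
        result = func(**kwargs)
    except Exception as exc:  # the real helper probably prints and exits
        raise SystemExit(f"Command failed: {exc}")
    if result is not None:
        print(result)
```

The payoff of this shape is visible throughout the diff: each command body shrinks to imports, a session lookup, and one `run_command` call, with no per-command try/except or REPL-mode branching.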

View file

@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@ -16,27 +16,41 @@ def calendar() -> None:
@calendar.command(name="list")
@pass_context
def calendar_list(ctx: PbiContext) -> None:
"""List calendar tables."""
run_tool(ctx, "calendar_operations", {"operation": "List"})
"""List calendar/date tables (tables with DataCategory = Time)."""
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import _safe_str
session = get_session_for_command(ctx)
# Filter to tables marked as date/calendar tables
# Check DataCategory on the actual TOM objects
results = []
for table in session.model.Tables:
cat = _safe_str(table.DataCategory)
if cat.lower() in ("time", "date"):
results.append({
"name": str(table.Name),
"dataCategory": cat,
"columns": table.Columns.Count,
})
from pbi_cli.core.output import format_result
format_result(results, ctx.json_output)
@calendar.command()
@calendar.command(name="mark")
@click.argument("name")
@click.option("--table", "-t", required=True, help="Target table name.")
@click.option("--description", default=None, help="Calendar description.")
@click.option("--date-column", required=True, help="Date column to use as key.")
@pass_context
def create(ctx: PbiContext, name: str, table: str, description: str | None) -> None:
"""Create a calendar table."""
definition = build_definition(
required={"name": name, "tableName": table},
optional={"description": description},
def mark(ctx: PbiContext, name: str, date_column: str) -> None:
"""Mark a table as a calendar/date table."""
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_mark_as_date
session = get_session_for_command(ctx)
run_command(
ctx,
table_mark_as_date,
model=session.model,
table_name=name,
date_column=date_column,
)
run_tool(ctx, "calendar_operations", {"operation": "Create", "definitions": [definition]})
@calendar.command()
@click.argument("name")
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a calendar."""
run_tool(ctx, "calendar_operations", {"operation": "Delete", "name": name})

View file

@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@ -18,7 +18,11 @@ def column() -> None:
@pass_context
def column_list(ctx: PbiContext, table: str) -> None:
"""List all columns in a table."""
run_tool(ctx, "column_operations", {"operation": "List", "tableName": table})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import column_list as _column_list
session = get_session_for_command(ctx)
run_command(ctx, _column_list, model=session.model, table_name=table)
@column.command()
@ -27,7 +31,11 @@ def column_list(ctx: PbiContext, table: str) -> None:
@pass_context
def get(ctx: PbiContext, name: str, table: str) -> None:
"""Get details of a specific column."""
run_tool(ctx, "column_operations", {"operation": "Get", "name": name, "tableName": table})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import column_get
session = get_session_for_command(ctx)
run_command(ctx, column_get, model=session.model, table_name=table, column_name=name)
@column.command()
@ -58,19 +66,25 @@ def create(
is_key: bool,
) -> None:
"""Create a new column."""
definition = build_definition(
required={"name": name, "tableName": table, "dataType": data_type},
optional={
"sourceColumn": source_column,
"expression": expression,
"formatString": format_string,
"description": description,
"displayFolder": folder,
"isHidden": hidden if hidden else None,
"isKey": is_key if is_key else None,
},
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import column_create
session = get_session_for_command(ctx)
run_command(
ctx,
column_create,
model=session.model,
table_name=table,
name=name,
data_type=data_type,
source_column=source_column,
expression=expression,
format_string=format_string,
description=description,
display_folder=folder,
is_hidden=hidden,
is_key=is_key,
)
run_tool(ctx, "column_operations", {"operation": "Create", "definitions": [definition]})
@column.command()
@ -79,7 +93,11 @@ def create(
@pass_context
def delete(ctx: PbiContext, name: str, table: str) -> None:
"""Delete a column."""
run_tool(ctx, "column_operations", {"operation": "Delete", "name": name, "tableName": table})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import column_delete
session = get_session_for_command(ctx)
run_command(ctx, column_delete, model=session.model, table_name=table, column_name=name)
@column.command()
@ -89,30 +107,15 @@ def delete(ctx: PbiContext, name: str, table: str) -> None:
@pass_context
def rename(ctx: PbiContext, old_name: str, new_name: str, table: str) -> None:
"""Rename a column."""
run_tool(
ctx,
"column_operations",
{
"operation": "Rename",
"name": old_name,
"newName": new_name,
"tableName": table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import column_rename
@column.command(name="export-tmdl")
@click.argument("name")
@click.option("--table", "-t", required=True, help="Table name.")
@pass_context
def export_tmdl(ctx: PbiContext, name: str, table: str) -> None:
"""Export a column as TMDL."""
run_tool(
session = get_session_for_command(ctx)
run_command(
ctx,
"column_operations",
{
"operation": "ExportTMDL",
"name": name,
"tableName": table,
},
column_rename,
model=session.model,
table_name=table,
old_name=old_name,
new_name=new_name,
)

View file

@ -12,9 +12,9 @@ from pbi_cli.core.connection_store import (
remove_connection,
save_connections,
)
from pbi_cli.core.mcp_client import get_client
from pbi_cli.core.output import (
print_error,
print_info,
print_json,
print_success,
print_table,
@ -33,16 +33,12 @@ from pbi_cli.main import PbiContext, pass_context
@click.option(
"--name", "-n", default=None, help="Name for this connection (auto-generated if omitted)."
)
@click.option(
"--connection-string", default="", help="Full connection string (overrides data-source)."
)
@pass_context
def connect(
ctx: PbiContext,
data_source: str | None,
catalog: str,
name: str | None,
connection_string: str,
) -> None:
"""Connect to a Power BI instance via data source.
@ -53,98 +49,34 @@ def connect(
if data_source is None:
data_source = _auto_discover_data_source()
conn_name = name or _auto_name(data_source)
from pbi_cli.core.session import connect as session_connect
request: dict[str, object] = {
"operation": "Connect",
"dataSource": data_source,
}
if catalog:
request["initialCatalog"] = catalog
if connection_string:
request["connectionString"] = connection_string
repl = ctx.repl_mode
client = get_client(repl_mode=repl)
try:
result = client.call_tool("connection_operations", request)
session = session_connect(data_source, catalog)
# Use server-returned connectionName if available, otherwise our local name
server_name = _extract_connection_name(result)
effective_name = server_name or conn_name
# Use provided name, or the session's auto-generated name
effective_name = name or session.connection_name
info = ConnectionInfo(
name=effective_name,
data_source=data_source,
initial_catalog=catalog,
connection_string=connection_string,
)
store = load_connections()
store = add_connection(store, info)
save_connections(store)
if ctx.json_output:
print_json({"connection": effective_name, "status": "connected", "result": result})
print_json({
"connection": effective_name,
"status": "connected",
"dataSource": data_source,
})
else:
print_success(f"Connected: {effective_name} ({data_source})")
except Exception as e:
print_error(f"Connection failed: {e}")
raise SystemExit(1)
finally:
if not repl:
client.stop()
@click.command(name="connect-fabric")
@click.option("--workspace", "-w", required=True, help="Fabric workspace name (exact match).")
@click.option("--model", "-m", required=True, help="Semantic model name (exact match).")
@click.option("--name", "-n", default=None, help="Name for this connection.")
@click.option("--tenant", default="myorg", help="Tenant name for B2B scenarios.")
@pass_context
def connect_fabric(
ctx: PbiContext, workspace: str, model: str, name: str | None, tenant: str
) -> None:
"""Connect to a Fabric workspace semantic model."""
_ensure_ready()
conn_name = name or f"{workspace}/{model}"
request: dict[str, object] = {
"operation": "ConnectFabric",
"workspaceName": workspace,
"semanticModelName": model,
"tenantName": tenant,
}
repl = ctx.repl_mode
client = get_client(repl_mode=repl)
try:
result = client.call_tool("connection_operations", request)
server_name = _extract_connection_name(result)
effective_name = server_name or conn_name
info = ConnectionInfo(
name=effective_name,
data_source=f"powerbi://api.powerbi.com/v1.0/{tenant}/{workspace}",
workspace_name=workspace,
semantic_model_name=model,
tenant_name=tenant,
)
store = load_connections()
store = add_connection(store, info)
save_connections(store)
if ctx.json_output:
print_json({"connection": effective_name, "status": "connected", "result": result})
else:
print_success(f"Connected to Fabric: {workspace}/{model}")
except Exception as e:
print_error(f"Fabric connection failed: {e}")
raise SystemExit(1)
finally:
if not repl:
client.stop()
@click.command()
@ -154,6 +86,8 @@ def connect_fabric(
@pass_context
def disconnect(ctx: PbiContext, name: str | None) -> None:
"""Disconnect from the active or named connection."""
from pbi_cli.core.session import disconnect as session_disconnect
store = load_connections()
target = name or store.last_used
@ -161,30 +95,15 @@ def disconnect(ctx: PbiContext, name: str | None) -> None:
print_error("No active connection to disconnect.")
raise SystemExit(1)
repl = ctx.repl_mode
client = get_client(repl_mode=repl)
try:
client.call_tool(
"connection_operations",
{
"operation": "Disconnect",
"connectionName": target,
},
)
session_disconnect()
store = remove_connection(store, target)
save_connections(store)
store = remove_connection(store, target)
save_connections(store)
if ctx.json_output:
print_json({"connection": target, "status": "disconnected"})
else:
print_success(f"Disconnected: {target}")
except Exception as e:
print_error(f"Disconnect failed: {e}")
raise SystemExit(1)
finally:
if not repl:
client.stop()
if ctx.json_output:
print_json({"connection": target, "status": "disconnected"})
else:
print_success(f"Disconnected: {target}")
@click.group()
@ -249,11 +168,7 @@ def connections_last(ctx: PbiContext) -> None:
def _auto_discover_data_source() -> str:
"""Auto-detect a running Power BI Desktop instance.
Raises click.ClickException if no instance is found.
"""
from pbi_cli.core.output import print_info
"""Auto-detect a running Power BI Desktop instance."""
from pbi_cli.utils.platform import discover_pbi_port
port = discover_pbi_port()
@ -270,30 +185,17 @@ def _auto_discover_data_source() -> str:
def _ensure_ready() -> None:
"""Auto-setup binary and skills if not already done.
"""Auto-install skills if not already done.
Lets users go straight from install to connect in one step:
pipx install pbi-cli-tool
pbi connect -d localhost:54321
"""
from pbi_cli.core.binary_manager import resolve_binary
try:
resolve_binary()
except FileNotFoundError:
from pbi_cli.core.binary_manager import download_and_extract
from pbi_cli.core.output import print_info
print_info("MCP binary not found. Running first-time setup...")
download_and_extract()
from pbi_cli.commands.skills_cmd import SKILLS_TARGET_DIR, _get_bundled_skills
bundled = _get_bundled_skills()
any_missing = any(not (SKILLS_TARGET_DIR / name / "SKILL.md").exists() for name in bundled)
if bundled and any_missing:
from pbi_cli.core.output import print_info
print_info("Installing Claude Code skills...")
for name, source in sorted(bundled.items()):
target_dir = SKILLS_TARGET_DIR / name
@ -304,16 +206,3 @@ def _ensure_ready() -> None:
target_file = target_dir / "SKILL.md"
target_file.write_text(source_file.read_text(encoding="utf-8"), encoding="utf-8")
print_info("Skills installed.")
def _extract_connection_name(result: object) -> str | None:
"""Extract connectionName from MCP server response, if present."""
if isinstance(result, dict):
return result.get("connectionName") or result.get("ConnectionName")
return None
def _auto_name(data_source: str) -> str:
"""Generate a connection name from a data source string."""
cleaned = data_source.replace("://", "-").replace("/", "-").replace(":", "-")
return cleaned[:50]

View file

@ -1,10 +1,10 @@
"""Database-level operations: list, TMDL import/export, Fabric deploy."""
"""Database-level operations: list, TMDL import/export, TMSL export."""
from __future__ import annotations
import click
from pbi_cli.commands._helpers import run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@ -17,7 +17,11 @@ def database() -> None:
@pass_context
def database_list(ctx: PbiContext) -> None:
"""List all databases on the connected server."""
run_tool(ctx, "database_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import database_list as _database_list
session = get_session_for_command(ctx)
run_command(ctx, _database_list, server=session.server)
@database.command(name="import-tmdl")
@ -25,14 +29,11 @@ def database_list(ctx: PbiContext) -> None:
@pass_context
def import_tmdl(ctx: PbiContext, folder_path: str) -> None:
"""Import a model from a TMDL folder."""
run_tool(
ctx,
"database_operations",
{
"operation": "ImportFromTmdlFolder",
"tmdlFolderPath": folder_path,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import import_tmdl as _import_tmdl
session = get_session_for_command(ctx)
run_command(ctx, _import_tmdl, server=session.server, folder_path=folder_path)
@database.command(name="export-tmdl")
@ -40,41 +41,19 @@ def import_tmdl(ctx: PbiContext, folder_path: str) -> None:
@pass_context
def export_tmdl(ctx: PbiContext, folder_path: str) -> None:
"""Export the model to a TMDL folder."""
run_tool(
ctx,
"database_operations",
{
"operation": "ExportToTmdlFolder",
"tmdlFolderPath": folder_path,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import export_tmdl as _export_tmdl
session = get_session_for_command(ctx)
run_command(ctx, _export_tmdl, database=session.database, folder_path=folder_path)
@database.command(name="export-tmsl")
@pass_context
def export_tmsl(ctx: PbiContext) -> None:
"""Export the model as TMSL."""
run_tool(ctx, "database_operations", {"operation": "ExportTMSL"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import export_tmsl as _export_tmsl
@database.command()
@click.option("--workspace", "-w", required=True, help="Target Fabric workspace name.")
@click.option("--new-name", default=None, help="New database name in target workspace.")
@click.option("--tenant", default=None, help="Tenant name for B2B scenarios.")
@pass_context
def deploy(ctx: PbiContext, workspace: str, new_name: str | None, tenant: str | None) -> None:
"""Deploy the model to a Fabric workspace."""
deploy_request: dict[str, object] = {"targetWorkspaceName": workspace}
if new_name:
deploy_request["newDatabaseName"] = new_name
if tenant:
deploy_request["targetTenantName"] = tenant
run_tool(
ctx,
"database_operations",
{
"operation": "DeployToFabric",
"deployToFabricRequest": deploy_request,
},
)
session = get_session_for_command(ctx)
run_command(ctx, _export_tmsl, database=session.database)

View file

@ -6,9 +6,8 @@ import sys
import click
from pbi_cli.commands._helpers import _auto_reconnect, resolve_connection_name
from pbi_cli.core.mcp_client import get_client
from pbi_cli.core.output import format_mcp_result, print_error
from pbi_cli.commands._helpers import run_command
from pbi_cli.core.output import print_error
from pbi_cli.main import PbiContext, pass_context
@ -23,10 +22,6 @@ def dax() -> None:
"--file", "-f", "query_file", type=click.Path(exists=True), help="Read query from file."
)
@click.option("--max-rows", type=int, default=None, help="Maximum rows to return.")
@click.option("--metrics", is_flag=True, default=False, help="Include execution metrics.")
@click.option(
"--metrics-only", is_flag=True, default=False, help="Return metrics without row data."
)
@click.option("--timeout", type=int, default=200, help="Query timeout in seconds.")
@pass_context
def execute(
@ -34,8 +29,6 @@ def execute(
query: str,
query_file: str | None,
max_rows: int | None,
metrics: bool,
metrics_only: bool,
timeout: int,
) -> None:
"""Execute a DAX query.
@ -53,33 +46,18 @@ def execute(
print_error("No query provided. Pass as argument, --file, or stdin.")
raise SystemExit(1)
request: dict[str, object] = {
"operation": "Execute",
"query": resolved_query,
"timeoutSeconds": timeout,
"getExecutionMetrics": metrics or metrics_only,
"executionMetricsOnly": metrics_only,
}
if max_rows is not None:
request["maxRows"] = max_rows
from pbi_cli.core.adomd_backend import execute_dax
from pbi_cli.core.session import get_session_for_command
client = get_client()
try:
if not ctx.repl_mode:
conn_name = _auto_reconnect(client, ctx)
else:
conn_name = resolve_connection_name(ctx)
if conn_name:
request["connectionName"] = conn_name
result = client.call_tool("dax_query_operations", request)
format_mcp_result(result, ctx.json_output)
except Exception as e:
print_error(f"DAX execution failed: {e}")
raise SystemExit(1)
finally:
if not ctx.repl_mode:
client.stop()
session = get_session_for_command(ctx)
run_command(
ctx,
execute_dax,
adomd_connection=session.adomd_connection,
query=resolved_query,
max_rows=max_rows,
timeout=timeout,
)
@dax.command()
@ -96,54 +74,29 @@ def validate(ctx: PbiContext, query: str, query_file: str | None, timeout: int)
print_error("No query provided.")
raise SystemExit(1)
request: dict[str, object] = {
"operation": "Validate",
"query": resolved_query,
"timeoutSeconds": timeout,
}
from pbi_cli.core.adomd_backend import validate_dax
from pbi_cli.core.session import get_session_for_command
client = get_client()
try:
if not ctx.repl_mode:
conn_name = _auto_reconnect(client, ctx)
else:
conn_name = resolve_connection_name(ctx)
if conn_name:
request["connectionName"] = conn_name
result = client.call_tool("dax_query_operations", request)
format_mcp_result(result, ctx.json_output)
except Exception as e:
print_error(f"DAX validation failed: {e}")
raise SystemExit(1)
finally:
if not ctx.repl_mode:
client.stop()
session = get_session_for_command(ctx)
run_command(
ctx,
validate_dax,
adomd_connection=session.adomd_connection,
query=resolved_query,
timeout=timeout,
)
@dax.command(name="clear-cache")
@pass_context
def clear_cache(ctx: PbiContext) -> None:
def clear_cache_cmd(ctx: PbiContext) -> None:
"""Clear the DAX query cache."""
request: dict[str, object] = {"operation": "ClearCache"}
from pbi_cli.core.adomd_backend import clear_cache
from pbi_cli.core.session import get_session_for_command
client = get_client()
try:
if not ctx.repl_mode:
conn_name = _auto_reconnect(client, ctx)
else:
conn_name = resolve_connection_name(ctx)
if conn_name:
request["connectionName"] = conn_name
result = client.call_tool("dax_query_operations", request)
format_mcp_result(result, ctx.json_output)
except Exception as e:
print_error(f"Cache clear failed: {e}")
raise SystemExit(1)
finally:
if not ctx.repl_mode:
client.stop()
session = get_session_for_command(ctx)
db_id = str(session.database.ID) if session.database else ""
run_command(ctx, clear_cache, adomd_connection=session.adomd_connection, database_id=db_id)
def _resolve_query(query: str, query_file: str | None) -> str:

View file

@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@ -17,7 +17,11 @@ def expression() -> None:
@pass_context
def expression_list(ctx: PbiContext) -> None:
"""List all named expressions."""
run_tool(ctx, "named_expression_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import expression_list as _expression_list
session = get_session_for_command(ctx)
run_command(ctx, _expression_list, model=session.model)
@expression.command()
@ -25,27 +29,31 @@ def expression_list(ctx: PbiContext) -> None:
@pass_context
def get(ctx: PbiContext, name: str) -> None:
"""Get a named expression."""
run_tool(ctx, "named_expression_operations", {"operation": "Get", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import expression_get
session = get_session_for_command(ctx)
run_command(ctx, expression_get, model=session.model, name=name)
@expression.command()
@click.argument("name")
@click.option("--expression", "-e", required=True, help="M expression.")
@click.option("--expression", "-e", "expr", required=True, help="M expression.")
@click.option("--description", default=None, help="Expression description.")
@pass_context
def create(ctx: PbiContext, name: str, expression: str, description: str | None) -> None:
def create(ctx: PbiContext, name: str, expr: str, description: str | None) -> None:
"""Create a named expression."""
definition = build_definition(
required={"name": name, "expression": expression},
optional={"description": description},
)
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import expression_create
session = get_session_for_command(ctx)
run_command(
ctx,
"named_expression_operations",
{
"operation": "Create",
"definitions": [definition],
},
expression_create,
model=session.model,
name=name,
expression=expr,
description=description,
)
@ -54,25 +62,8 @@ def create(ctx: PbiContext, name: str, expression: str, description: str | None)
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a named expression."""
run_tool(ctx, "named_expression_operations", {"operation": "Delete", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import expression_delete
@expression.command(name="create-param")
@click.argument("name")
@click.option("--expression", "-e", required=True, help="Default value expression.")
@click.option("--description", default=None, help="Parameter description.")
@pass_context
def create_param(ctx: PbiContext, name: str, expression: str, description: str | None) -> None:
"""Create a model parameter."""
definition = build_definition(
required={"name": name, "expression": expression},
optional={"description": description},
)
run_tool(
ctx,
"named_expression_operations",
{
"operation": "CreateParameter",
"definitions": [definition],
},
)
session = get_session_for_command(ctx)
run_command(ctx, expression_delete, model=session.model, name=name)

View file

@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@ -18,10 +18,11 @@ def hierarchy() -> None:
@pass_context
def hierarchy_list(ctx: PbiContext, table: str | None) -> None:
"""List hierarchies."""
request: dict[str, object] = {"operation": "List"}
if table:
request["tableName"] = table
run_tool(ctx, "user_hierarchy_operations", request)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import hierarchy_list as _hierarchy_list
session = get_session_for_command(ctx)
run_command(ctx, _hierarchy_list, model=session.model, table_name=table)
@hierarchy.command()
@ -30,15 +31,11 @@ def hierarchy_list(ctx: PbiContext, table: str | None) -> None:
@pass_context
def get(ctx: PbiContext, name: str, table: str) -> None:
"""Get hierarchy details."""
run_tool(
ctx,
"user_hierarchy_operations",
{
"operation": "Get",
"name": name,
"tableName": table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import hierarchy_get
session = get_session_for_command(ctx)
run_command(ctx, hierarchy_get, model=session.model, table_name=table, name=name)
@hierarchy.command()
@ -48,11 +45,18 @@ def get(ctx: PbiContext, name: str, table: str) -> None:
@pass_context
def create(ctx: PbiContext, name: str, table: str, description: str | None) -> None:
"""Create a hierarchy."""
definition = build_definition(
required={"name": name, "tableName": table},
optional={"description": description},
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import hierarchy_create
session = get_session_for_command(ctx)
run_command(
ctx,
hierarchy_create,
model=session.model,
table_name=table,
name=name,
description=description,
)
run_tool(ctx, "user_hierarchy_operations", {"operation": "Create", "definitions": [definition]})
@hierarchy.command()
@ -61,12 +65,8 @@ def create(ctx: PbiContext, name: str, table: str, description: str | None) -> N
@pass_context
def delete(ctx: PbiContext, name: str, table: str) -> None:
"""Delete a hierarchy."""
run_tool(
ctx,
"user_hierarchy_operations",
{
"operation": "Delete",
"name": name,
"tableName": table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import hierarchy_delete
session = get_session_for_command(ctx)
run_command(ctx, hierarchy_delete, model=session.model, table_name=table, name=name)

View file

@ -6,7 +6,7 @@ import sys
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@ -20,10 +20,11 @@ def measure() -> None:
@pass_context
def measure_list(ctx: PbiContext, table: str | None) -> None:
"""List all measures."""
request: dict[str, object] = {"operation": "List"}
if table:
request["tableName"] = table
run_tool(ctx, "measure_operations", request)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_list as _measure_list
session = get_session_for_command(ctx)
run_command(ctx, _measure_list, model=session.model, table_name=table)
@measure.command()
@ -32,15 +33,11 @@ def measure_list(ctx: PbiContext, table: str | None) -> None:
@pass_context
def get(ctx: PbiContext, name: str, table: str) -> None:
"""Get details of a specific measure."""
run_tool(
ctx,
"measure_operations",
{
"operation": "Get",
"name": name,
"tableName": table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_get
session = get_session_for_command(ctx)
run_command(ctx, measure_get, model=session.model, table_name=table, measure_name=name)
@measure.command()
@ -66,22 +63,21 @@ def create(
if expression == "-":
expression = sys.stdin.read().strip()
definition = build_definition(
required={"name": name, "expression": expression, "tableName": table},
optional={
"formatString": format_string,
"description": description,
"displayFolder": folder,
"isHidden": hidden if hidden else None,
},
)
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_create
session = get_session_for_command(ctx)
run_command(
ctx,
"measure_operations",
{
"operation": "Create",
"definitions": [definition],
},
measure_create,
model=session.model,
table_name=table,
name=name,
expression=expression,
format_string=format_string,
description=description,
display_folder=folder,
is_hidden=hidden,
)
@ -106,22 +102,20 @@ def update(
if expression == "-":
expression = sys.stdin.read().strip()
definition = build_definition(
required={"name": name, "tableName": table},
optional={
"expression": expression,
"formatString": format_string,
"description": description,
"displayFolder": folder,
},
)
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_update
session = get_session_for_command(ctx)
run_command(
ctx,
"measure_operations",
{
"operation": "Update",
"definitions": [definition],
},
measure_update,
model=session.model,
table_name=table,
name=name,
expression=expression,
format_string=format_string,
description=description,
display_folder=folder,
)
@ -131,15 +125,11 @@ def update(
@pass_context
def delete(ctx: PbiContext, name: str, table: str) -> None:
"""Delete a measure."""
run_tool(
ctx,
"measure_operations",
{
"operation": "Delete",
"name": name,
"tableName": table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_delete
session = get_session_for_command(ctx)
run_command(ctx, measure_delete, model=session.model, table_name=table, name=name)
@measure.command()
@ -149,15 +139,17 @@ def delete(ctx: PbiContext, name: str, table: str) -> None:
@pass_context
def rename(ctx: PbiContext, old_name: str, new_name: str, table: str) -> None:
"""Rename a measure."""
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_rename
session = get_session_for_command(ctx)
run_command(
ctx,
"measure_operations",
{
"operation": "Rename",
"name": old_name,
"newName": new_name,
"tableName": table,
},
measure_rename,
model=session.model,
table_name=table,
old_name=old_name,
new_name=new_name,
)
@@ -168,30 +160,15 @@ def rename(ctx: PbiContext, old_name: str, new_name: str, table: str) -> None:
@pass_context
def move(ctx: PbiContext, name: str, table: str, to_table: str) -> None:
"""Move a measure to a different table."""
run_tool(
ctx,
"measure_operations",
{
"operation": "Move",
"name": name,
"tableName": table,
"destinationTableName": to_table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import measure_move
@measure.command(name="export-tmdl")
@click.argument("name")
@click.option("--table", "-t", required=True, help="Table containing the measure.")
@pass_context
def export_tmdl(ctx: PbiContext, name: str, table: str) -> None:
"""Export a measure as TMDL."""
run_tool(
session = get_session_for_command(ctx)
run_command(
ctx,
"measure_operations",
{
"operation": "ExportTMDL",
"name": name,
"tableName": table,
},
measure_move,
model=session.model,
table_name=table,
name=name,
dest_table_name=to_table,
)
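Every command touched in this diff follows the same shape: fetch a session, then hand a `tom_backend` callable plus plain keyword arguments to `run_command`, instead of building an operation dict for `run_tool`. A minimal pure-Python sketch of that dispatch pattern (the real signature in `pbi_cli.commands._helpers` may differ; `FakeContext` and `measure_delete` here are stand-ins):

```python
import json
from typing import Any, Callable


class FakeContext:
    """Stand-in for PbiContext; only json_output is used in this sketch."""

    def __init__(self, json_output: bool = True) -> None:
        self.json_output = json_output


def run_command(ctx: FakeContext, fn: Callable[..., Any], **kwargs: Any) -> Any:
    """Invoke a backend callable and render its result.

    Mirrors the migration in the diff: the CLI layer no longer builds
    {"operation": ..., "definitions": [...]} payloads; it forwards
    keyword arguments straight to an in-process function.
    """
    result = fn(**kwargs)
    if ctx.json_output:
        print(json.dumps(result, default=str))
    return result


def measure_delete(model: Any, table_name: str, name: str) -> dict[str, str]:
    # Hypothetical backend function; the real one mutates the TOM model.
    return {"status": "deleted", "table": table_name, "measure": name}


out = run_command(
    FakeContext(), measure_delete, model=None, table_name="Sales", name="Total Sales"
)
```

The design win is that each backend function carries a normal Python signature, so typos in argument names fail immediately instead of being silently ignored inside an operation dict.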

View file

@@ -4,7 +4,9 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import model_get, model_get_stats
from pbi_cli.main import PbiContext, pass_context
@@ -17,40 +19,13 @@ def model() -> None:
@pass_context
def get(ctx: PbiContext) -> None:
"""Get model metadata."""
run_tool(ctx, "model_operations", {"operation": "Get"})
session = get_session_for_command(ctx)
run_command(ctx, model_get, model=session.model, database=session.database)
@model.command()
@pass_context
def stats(ctx: PbiContext) -> None:
"""Get model statistics."""
run_tool(ctx, "model_operations", {"operation": "GetStats"})
@model.command()
@click.option(
"--type",
"refresh_type",
type=click.Choice(["Automatic", "Full", "Calculate", "DataOnly", "Defragment"]),
default="Automatic",
help="Refresh type.",
)
@pass_context
def refresh(ctx: PbiContext, refresh_type: str) -> None:
"""Refresh the model."""
run_tool(ctx, "model_operations", {"operation": "Refresh", "refreshType": refresh_type})
@model.command()
@click.argument("new_name")
@pass_context
def rename(ctx: PbiContext, new_name: str) -> None:
"""Rename the model."""
run_tool(ctx, "model_operations", {"operation": "Rename", "newName": new_name})
@model.command(name="export-tmdl")
@pass_context
def export_tmdl(ctx: PbiContext) -> None:
"""Export the model as TMDL."""
run_tool(ctx, "model_operations", {"operation": "ExportTMDL"})
session = get_session_for_command(ctx)
run_command(ctx, model_get_stats, model=session.model)

View file

@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -18,7 +18,11 @@ def partition() -> None:
@pass_context
def partition_list(ctx: PbiContext, table: str) -> None:
"""List partitions in a table."""
run_tool(ctx, "partition_operations", {"operation": "List", "tableName": table})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import partition_list as _partition_list
session = get_session_for_command(ctx)
run_command(ctx, _partition_list, model=session.model, table_name=table)
@partition.command()
@@ -31,11 +35,19 @@ def create(
ctx: PbiContext, name: str, table: str, expression: str | None, mode: str | None
) -> None:
"""Create a partition."""
definition = build_definition(
required={"name": name, "tableName": table},
optional={"expression": expression, "mode": mode},
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import partition_create
session = get_session_for_command(ctx)
run_command(
ctx,
partition_create,
model=session.model,
table_name=table,
name=name,
expression=expression,
mode=mode,
)
run_tool(ctx, "partition_operations", {"operation": "Create", "definitions": [definition]})
@partition.command()
@@ -44,7 +56,11 @@ def create(
@pass_context
def delete(ctx: PbiContext, name: str, table: str) -> None:
"""Delete a partition."""
run_tool(ctx, "partition_operations", {"operation": "Delete", "name": name, "tableName": table})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import partition_delete
session = get_session_for_command(ctx)
run_command(ctx, partition_delete, model=session.model, table_name=table, name=name)
@partition.command()
@@ -53,12 +69,8 @@ def delete(ctx: PbiContext, name: str, table: str) -> None:
@pass_context
def refresh(ctx: PbiContext, name: str, table: str) -> None:
"""Refresh a partition."""
run_tool(
ctx,
"partition_operations",
{
"operation": "Refresh",
"name": name,
"tableName": table,
},
)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import partition_refresh
session = get_session_for_command(ctx)
run_command(ctx, partition_refresh, model=session.model, table_name=table, name=name)

View file

@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -17,7 +17,11 @@ def perspective() -> None:
@pass_context
def perspective_list(ctx: PbiContext) -> None:
"""List all perspectives."""
run_tool(ctx, "perspective_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import perspective_list as _perspective_list
session = get_session_for_command(ctx)
run_command(ctx, _perspective_list, model=session.model)
@perspective.command()
@@ -26,8 +30,11 @@ def perspective_list(ctx: PbiContext) -> None:
@pass_context
def create(ctx: PbiContext, name: str, description: str | None) -> None:
"""Create a perspective."""
definition = build_definition(required={"name": name}, optional={"description": description})
run_tool(ctx, "perspective_operations", {"operation": "Create", "definitions": [definition]})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import perspective_create
session = get_session_for_command(ctx)
run_command(ctx, perspective_create, model=session.model, name=name, description=description)
@perspective.command()
@@ -35,4 +42,8 @@ def create(ctx: PbiContext, name: str, description: str | None) -> None:
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a perspective."""
run_tool(ctx, "perspective_operations", {"operation": "Delete", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import perspective_delete
session = get_session_for_command(ctx)
run_command(ctx, perspective_delete, model=session.model, name=name)

View file

@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -17,7 +17,11 @@ def relationship() -> None:
@pass_context
def relationship_list(ctx: PbiContext) -> None:
"""List all relationships."""
run_tool(ctx, "relationship_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_list as _rel_list
session = get_session_for_command(ctx)
run_command(ctx, _rel_list, model=session.model)
@relationship.command()
@@ -25,7 +29,11 @@ def relationship_list(ctx: PbiContext) -> None:
@pass_context
def get(ctx: PbiContext, name: str) -> None:
"""Get details of a specific relationship."""
run_tool(ctx, "relationship_operations", {"operation": "Get", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_get
session = get_session_for_command(ctx)
run_command(ctx, relationship_get, model=session.model, name=name)
@relationship.command()
@@ -53,20 +61,22 @@ def create(
active: bool,
) -> None:
"""Create a new relationship."""
definition = build_definition(
required={
"fromTable": from_table,
"fromColumn": from_column,
"toTable": to_table,
"toColumn": to_column,
},
optional={
"name": name,
"crossFilteringBehavior": cross_filter,
"isActive": active,
},
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_create
session = get_session_for_command(ctx)
run_command(
ctx,
relationship_create,
model=session.model,
from_table=from_table,
from_column=from_column,
to_table=to_table,
to_column=to_column,
name=name,
cross_filter=cross_filter,
is_active=active,
)
run_tool(ctx, "relationship_operations", {"operation": "Create", "definitions": [definition]})
@relationship.command()
@@ -74,7 +84,11 @@ def create(
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a relationship."""
run_tool(ctx, "relationship_operations", {"operation": "Delete", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_delete
session = get_session_for_command(ctx)
run_command(ctx, relationship_delete, model=session.model, name=name)
@relationship.command()
@@ -82,7 +96,11 @@ def delete(ctx: PbiContext, name: str) -> None:
@pass_context
def activate(ctx: PbiContext, name: str) -> None:
"""Activate a relationship."""
run_tool(ctx, "relationship_operations", {"operation": "Activate", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_set_active
session = get_session_for_command(ctx)
run_command(ctx, relationship_set_active, model=session.model, name=name, active=True)
@relationship.command()
@@ -90,7 +108,11 @@ def activate(ctx: PbiContext, name: str) -> None:
@pass_context
def deactivate(ctx: PbiContext, name: str) -> None:
"""Deactivate a relationship."""
run_tool(ctx, "relationship_operations", {"operation": "Deactivate", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_set_active
session = get_session_for_command(ctx)
run_command(ctx, relationship_set_active, model=session.model, name=name, active=False)
@relationship.command()
@@ -98,12 +120,8 @@ def deactivate(ctx: PbiContext, name: str) -> None:
@pass_context
def find(ctx: PbiContext, table: str) -> None:
"""Find relationships involving a table."""
run_tool(ctx, "relationship_operations", {"operation": "Find", "tableName": table})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import relationship_find
@relationship.command(name="export-tmdl")
@click.argument("name")
@pass_context
def export_tmdl(ctx: PbiContext, name: str) -> None:
"""Export a relationship as TMDL."""
run_tool(ctx, "relationship_operations", {"operation": "ExportTMDL", "name": name})
session = get_session_for_command(ctx)
run_command(ctx, relationship_find, model=session.model, table_name=table)

View file

@@ -12,9 +12,8 @@ from pbi_cli.main import PbiContext, pass_context
def repl(ctx: PbiContext) -> None:
"""Start an interactive REPL session.
Keeps the MCP server process alive across commands, avoiding the
2-3 second startup cost on each invocation. Type 'exit' or press
Ctrl+D to quit.
Keeps a persistent .NET connection alive across commands for
near-instant execution. Type 'exit' or press Ctrl+D to quit.
"""
from pbi_cli.utils.repl import start_repl

View file

@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -17,7 +17,11 @@ def security_role() -> None:
@pass_context
def role_list(ctx: PbiContext) -> None:
"""List all security roles."""
run_tool(ctx, "security_role_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import role_list as _role_list
session = get_session_for_command(ctx)
run_command(ctx, _role_list, model=session.model)
@security_role.command()
@@ -25,7 +29,11 @@ def role_list(ctx: PbiContext) -> None:
@pass_context
def get(ctx: PbiContext, name: str) -> None:
"""Get details of a security role."""
run_tool(ctx, "security_role_operations", {"operation": "Get", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import role_get
session = get_session_for_command(ctx)
run_command(ctx, role_get, model=session.model, name=name)
@security_role.command()
@@ -34,11 +42,11 @@ def get(ctx: PbiContext, name: str) -> None:
@pass_context
def create(ctx: PbiContext, name: str, description: str | None) -> None:
"""Create a new security role."""
definition = build_definition(
required={"name": name},
optional={"description": description},
)
run_tool(ctx, "security_role_operations", {"operation": "Create", "definitions": [definition]})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import role_create
session = get_session_for_command(ctx)
run_command(ctx, role_create, model=session.model, name=name, description=description)
@security_role.command()
@@ -46,12 +54,8 @@ def create(ctx: PbiContext, name: str, description: str | None) -> None:
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a security role."""
run_tool(ctx, "security_role_operations", {"operation": "Delete", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import role_delete
@security_role.command(name="export-tmdl")
@click.argument("name")
@pass_context
def export_tmdl(ctx: PbiContext, name: str) -> None:
"""Export a security role as TMDL."""
run_tool(ctx, "security_role_operations", {"operation": "ExportTMDL", "name": name})
session = get_session_for_command(ctx)
run_command(ctx, role_delete, model=session.model, name=name)

View file

@@ -1,75 +1,102 @@
"""pbi setup: download and manage the Power BI MCP binary."""
"""pbi setup: verify environment and install skills."""
from __future__ import annotations
import click
from pbi_cli.core.binary_manager import (
check_for_updates,
download_and_extract,
get_binary_info,
)
from pbi_cli.core.output import print_error, print_info, print_json, print_key_value, print_success
from pbi_cli.core.output import print_error, print_info, print_json, print_success
from pbi_cli.main import PbiContext, pass_context
@click.command()
@click.option("--version", "target_version", default=None, help="Specific version to install.")
@click.option("--check", is_flag=True, default=False, help="Check for updates without installing.")
@click.option("--info", is_flag=True, default=False, help="Show info about the current binary.")
@click.option("--info", is_flag=True, default=False, help="Show environment info.")
@pass_context
def setup(ctx: PbiContext, target_version: str | None, check: bool, info: bool) -> None:
"""Download and set up the Power BI MCP server binary.
def setup(ctx: PbiContext, info: bool) -> None:
"""Verify the pbi-cli environment is ready.
Run this once after installing pbi-cli to download the binary.
Checks that pythonnet and the bundled .NET DLLs are available.
Also installs Claude Code skills if applicable.
"""
if info:
_show_info(ctx.json_output)
return
if check:
_check_updates(ctx.json_output)
return
_install(target_version, ctx.json_output)
_verify(ctx.json_output)
def _show_info(json_output: bool) -> None:
"""Show binary info."""
info = get_binary_info()
"""Show environment info."""
from pbi_cli import __version__
from pbi_cli.core.dotnet_loader import _dll_dir
dll_path = _dll_dir()
dlls_found = list(dll_path.glob("*.dll")) if dll_path.exists() else []
result = {
"version": __version__,
"dll_path": str(dll_path),
"dlls_found": len(dlls_found),
"dll_names": [d.name for d in dlls_found],
}
# Check pythonnet
try:
import pythonnet # noqa: F401
result["pythonnet"] = "installed"
except ImportError:
result["pythonnet"] = "missing"
if json_output:
print_json(info)
print_json(result)
else:
print_key_value("Power BI MCP Binary", info)
print_info(f"pbi-cli v{result['version']}")
print_info(f"DLL path: {result['dll_path']}")
print_info(f"DLLs found: {result['dlls_found']}")
print_info(f"pythonnet: {result['pythonnet']}")
def _check_updates(json_output: bool) -> None:
"""Check for available updates."""
def _verify(json_output: bool) -> None:
"""Verify the environment is ready."""
errors: list[str] = []
# Check pythonnet
try:
installed, latest, update_available = check_for_updates()
result = {
"installed_version": installed,
"latest_version": latest,
"update_available": update_available,
}
import pythonnet # noqa: F401
except ImportError:
errors.append("pythonnet not installed. Run: pip install pythonnet")
# Check DLLs
from pbi_cli.core.dotnet_loader import _dll_dir
dll_path = _dll_dir()
if not dll_path.exists():
errors.append(f"DLL directory not found: {dll_path}")
else:
required = [
"Microsoft.AnalysisServices.Tabular.dll",
"Microsoft.AnalysisServices.AdomdClient.dll",
]
for name in required:
if not (dll_path / name).exists():
errors.append(f"Missing DLL: {name}")
if errors:
for err in errors:
print_error(err)
if json_output:
print_json(result)
elif update_available:
print_info(f"Update available: {installed} -> {latest}")
print_info("Run 'pbi setup' to update.")
else:
print_success(f"Up to date: v{installed}")
except Exception as e:
print_error(f"Failed to check for updates: {e}")
print_json({"status": "error", "errors": errors})
raise SystemExit(1)
def _install(version: str | None, json_output: bool) -> None:
"""Download and install the binary."""
# Install skills
try:
bin_path = download_and_extract(version)
if json_output:
print_json({"binary_path": str(bin_path), "status": "installed"})
except Exception as e:
print_error(f"Setup failed: {e}")
raise SystemExit(1)
from pbi_cli.commands.connection import _ensure_ready
_ensure_ready()
except Exception:
pass
if json_output:
print_json({"status": "ready"})
else:
print_success("Environment is ready.")
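The rewritten `_verify` uses a collect-then-report shape: it accumulates every missing prerequisite before printing anything, so one run surfaces all problems at once. A small standalone sketch of that shape for the DLL check (the path and DLL names below are just the ones this diff mentions, not an assumption about the real layout):

```python
from pathlib import Path


def verify_dlls(dll_path: Path, required: list[str]) -> list[str]:
    """Return human-readable errors for missing prerequisites.

    No early exit on the first missing DLL: the caller reports the
    whole list, mirroring _verify in the diff.
    """
    errors: list[str] = []
    if not dll_path.exists():
        # Directory missing makes per-file checks meaningless.
        errors.append(f"DLL directory not found: {dll_path}")
        return errors
    for name in required:
        if not (dll_path / name).exists():
            errors.append(f"Missing DLL: {name}")
    return errors


errs = verify_dlls(Path("/nonexistent"), ["Microsoft.AnalysisServices.Tabular.dll"])
```

An empty list then maps to the "Environment is ready." success path; a non-empty one maps to the error printout and `SystemExit(1)`.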

View file

@@ -6,7 +6,7 @@ import sys
import click
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -19,7 +19,11 @@ def table() -> None:
@pass_context
def table_list(ctx: PbiContext) -> None:
"""List all tables."""
run_tool(ctx, "table_operations", {"operation": "List"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_list as _table_list
session = get_session_for_command(ctx)
run_command(ctx, _table_list, model=session.model)
@table.command()
@@ -27,7 +31,11 @@ def table_list(ctx: PbiContext) -> None:
@pass_context
def get(ctx: PbiContext, name: str) -> None:
"""Get details of a specific table."""
run_tool(ctx, "table_operations", {"operation": "Get", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_get
session = get_session_for_command(ctx)
run_command(ctx, table_get, model=session.model, table_name=name)
@table.command()
@@ -40,7 +48,6 @@ def get(ctx: PbiContext, name: str) -> None:
)
@click.option("--m-expression", default=None, help="M/Power Query expression (use - for stdin).")
@click.option("--dax-expression", default=None, help="DAX expression for calculated tables.")
@click.option("--sql-query", default=None, help="SQL query for DirectQuery.")
@click.option("--description", default=None, help="Table description.")
@click.option("--hidden", is_flag=True, default=False, help="Hide from client tools.")
@pass_context
@@ -50,7 +57,6 @@ def create(
mode: str,
m_expression: str | None,
dax_expression: str | None,
sql_query: str | None,
description: str | None,
hidden: bool,
) -> None:
@@ -60,24 +66,20 @@ def create(
if dax_expression == "-":
dax_expression = sys.stdin.read().strip()
definition = build_definition(
required={"name": name},
optional={
"mode": mode,
"mExpression": m_expression,
"daxExpression": dax_expression,
"sqlQuery": sql_query,
"description": description,
"isHidden": hidden if hidden else None,
},
)
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_create
session = get_session_for_command(ctx)
run_command(
ctx,
"table_operations",
{
"operation": "Create",
"definitions": [definition],
},
table_create,
model=session.model,
name=name,
mode=mode,
m_expression=m_expression,
dax_expression=dax_expression,
description=description,
is_hidden=hidden,
)
@@ -86,7 +88,11 @@ def create(
@pass_context
def delete(ctx: PbiContext, name: str) -> None:
"""Delete a table."""
run_tool(ctx, "table_operations", {"operation": "Delete", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_delete
session = get_session_for_command(ctx)
run_command(ctx, table_delete, model=session.model, table_name=name)
@table.command()
@@ -101,14 +107,16 @@ def delete(ctx: PbiContext, name: str) -> None:
@pass_context
def refresh(ctx: PbiContext, name: str, refresh_type: str) -> None:
"""Refresh a table."""
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_refresh
session = get_session_for_command(ctx)
run_command(
ctx,
"table_operations",
{
"operation": "Refresh",
"name": name,
"refreshType": refresh_type,
},
table_refresh,
model=session.model,
table_name=name,
refresh_type=refresh_type,
)
@@ -117,15 +125,11 @@ def refresh(ctx: PbiContext, name: str, refresh_type: str) -> None:
@pass_context
def schema(ctx: PbiContext, name: str) -> None:
"""Get the schema of a table."""
run_tool(ctx, "table_operations", {"operation": "GetSchema", "name": name})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_get_schema
@table.command(name="export-tmdl")
@click.argument("name")
@pass_context
def export_tmdl(ctx: PbiContext, name: str) -> None:
"""Export a table as TMDL."""
run_tool(ctx, "table_operations", {"operation": "ExportTMDL", "name": name})
session = get_session_for_command(ctx)
run_command(ctx, table_get_schema, model=session.model, table_name=name)
@table.command()
@@ -134,14 +138,16 @@ def export_tmdl(ctx: PbiContext, name: str) -> None:
@pass_context
def rename(ctx: PbiContext, old_name: str, new_name: str) -> None:
"""Rename a table."""
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_rename
session = get_session_for_command(ctx)
run_command(
ctx,
"table_operations",
{
"operation": "Rename",
"name": old_name,
"newName": new_name,
},
table_rename,
model=session.model,
old_name=old_name,
new_name=new_name,
)
@@ -151,12 +157,14 @@ def rename(ctx: PbiContext, old_name: str, new_name: str) -> None:
@pass_context
def mark_date_table(ctx: PbiContext, name: str, date_column: str) -> None:
"""Mark a table as a date table."""
run_tool(
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import table_mark_as_date
session = get_session_for_command(ctx)
run_command(
ctx,
"table_operations",
{
"operation": "MarkAsDateTable",
"name": name,
"dateColumn": date_column,
},
table_mark_as_date,
model=session.model,
table_name=name,
date_column=date_column,
)

View file

@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -17,21 +17,29 @@ def trace() -> None:
@pass_context
def start(ctx: PbiContext) -> None:
"""Start a diagnostic trace."""
run_tool(ctx, "trace_operations", {"operation": "Start"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import trace_start
session = get_session_for_command(ctx)
run_command(ctx, trace_start, server=session.server)
@trace.command()
@pass_context
def stop(ctx: PbiContext) -> None:
"""Stop the active trace."""
run_tool(ctx, "trace_operations", {"operation": "Stop"})
from pbi_cli.core.tom_backend import trace_stop
run_command(ctx, trace_stop)
@trace.command()
@pass_context
def fetch(ctx: PbiContext) -> None:
"""Fetch trace events."""
run_tool(ctx, "trace_operations", {"operation": "Fetch"})
from pbi_cli.core.tom_backend import trace_fetch
run_command(ctx, trace_fetch)
@trace.command()
@@ -39,4 +47,6 @@ def fetch(ctx: PbiContext) -> None:
@pass_context
def export(ctx: PbiContext, path: str) -> None:
"""Export trace events to a file."""
run_tool(ctx, "trace_operations", {"operation": "Export", "filePath": path})
from pbi_cli.core.tom_backend import trace_export
run_command(ctx, trace_export, path=path)

View file

@@ -4,7 +4,7 @@ from __future__ import annotations
import click
from pbi_cli.commands._helpers import run_tool
from pbi_cli.commands._helpers import run_command
from pbi_cli.main import PbiContext, pass_context
@@ -17,7 +17,11 @@ def transaction() -> None:
@pass_context
def begin(ctx: PbiContext) -> None:
"""Begin a new transaction."""
run_tool(ctx, "transaction_operations", {"operation": "Begin"})
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import transaction_begin
session = get_session_for_command(ctx)
run_command(ctx, transaction_begin, server=session.server)
@transaction.command()
@@ -25,10 +29,11 @@ def begin(ctx: PbiContext) -> None:
@pass_context
def commit(ctx: PbiContext, transaction_id: str) -> None:
"""Commit the active or specified transaction."""
request: dict[str, object] = {"operation": "Commit"}
if transaction_id:
request["transactionId"] = transaction_id
run_tool(ctx, "transaction_operations", request)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import transaction_commit
session = get_session_for_command(ctx)
run_command(ctx, transaction_commit, server=session.server, transaction_id=transaction_id)
@transaction.command()
@@ -36,7 +41,8 @@ def rollback(ctx: PbiContext, transaction_id: str) -> None:
@pass_context
def rollback(ctx: PbiContext, transaction_id: str) -> None:
"""Rollback the active or specified transaction."""
request: dict[str, object] = {"operation": "Rollback"}
if transaction_id:
request["transactionId"] = transaction_id
run_tool(ctx, "transaction_operations", request)
from pbi_cli.core.session import get_session_for_command
from pbi_cli.core.tom_backend import transaction_rollback
session = get_session_for_command(ctx)
run_command(ctx, transaction_rollback, server=session.server, transaction_id=transaction_id)

View file

@@ -0,0 +1,126 @@
"""ADOMD.NET operations: DAX query execution.
Provides DAX query execution, validation, and cache clearing via
direct ADOMD.NET interop. Results are returned as plain Python dicts.
"""
from __future__ import annotations
from typing import Any
def execute_dax(
adomd_connection: Any,
query: str,
max_rows: int | None = None,
timeout: int = 200,
) -> dict[str, Any]:
"""Execute a DAX query and return results.
Args:
adomd_connection: An open AdomdConnection.
query: The DAX query string (an EVALUATE statement, optionally preceded by DEFINE).
max_rows: Optional row limit.
timeout: Query timeout in seconds.
Returns:
Dict with ``columns`` and ``rows`` keys.
"""
from pbi_cli.core.dotnet_loader import get_adomd_command_class
AdomdCommand = get_adomd_command_class()
cmd = AdomdCommand(query, adomd_connection)
cmd.CommandTimeout = timeout
reader = cmd.ExecuteReader()
# Read column headers
columns: list[str] = []
for i in range(reader.FieldCount):
columns.append(str(reader.GetName(i)))
# Read rows
rows: list[dict[str, Any]] = []
row_count = 0
while reader.Read():
if max_rows is not None and row_count >= max_rows:
break
row: dict[str, Any] = {}
for i, col_name in enumerate(columns):
val = reader.GetValue(i)
row[col_name] = _convert_value(val)
rows.append(row)
row_count += 1
reader.Close()
return {"columns": columns, "rows": rows}
def validate_dax(
adomd_connection: Any,
query: str,
timeout: int = 10,
) -> dict[str, Any]:
"""Validate a DAX query without returning data.
Executes the query with a short timeout and discards the reader
immediately, so syntax and binding errors surface without
materializing a full result set.
"""
from pbi_cli.core.dotnet_loader import get_adomd_command_class
AdomdCommand = get_adomd_command_class()
# Use a lightweight wrapper to validate syntax
validate_query = query.strip()
cmd = AdomdCommand(validate_query, adomd_connection)
cmd.CommandTimeout = timeout
try:
reader = cmd.ExecuteReader()
reader.Close()
return {"valid": True, "query": query.strip()}
except Exception as e:
return {"valid": False, "error": str(e), "query": query.strip()}
def clear_cache(
adomd_connection: Any,
database_id: str = "",
) -> dict[str, str]:
"""Clear the Analysis Services cache via XMLA."""
from pbi_cli.core.dotnet_loader import get_adomd_command_class
AdomdCommand = get_adomd_command_class()
object_xml = ""
if database_id:
object_xml = f"<DatabaseID>{database_id}</DatabaseID>"
xmla = (
'<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">'
f"<Object>{object_xml}</Object>"
"</ClearCache>"
)
cmd = AdomdCommand(xmla, adomd_connection)
cmd.ExecuteNonQuery()
return {"status": "cache_cleared"}
def _convert_value(val: Any) -> Any:
"""Convert a .NET value to a Python-native type."""
if val is None:
return None
type_name = type(val).__name__
if type_name in ("Int32", "Int64", "Int16"):
return int(val)
if type_name in ("Double", "Single", "Decimal"):
return float(val)
if type_name == "Boolean":
return bool(val)
if type_name == "DateTime":
return str(val)
if type_name == "DBNull":
return None
return str(val)

View file

@@ -1,247 +0,0 @@
"""Binary manager: download, extract, and resolve the Power BI MCP server binary.
The binary is a .NET executable distributed as part of a VS Code extension (VSIX).
This module handles downloading the VSIX from the VS Marketplace, extracting the
server binary, and resolving the binary path for the MCP client.
"""
from __future__ import annotations
import os
import shutil
import tempfile
import zipfile
from pathlib import Path
import httpx
from pbi_cli.core.config import PBI_CLI_HOME, ensure_home_dir, load_config, save_config
from pbi_cli.core.output import print_info, print_success
from pbi_cli.utils.platform import (
binary_name,
detect_platform,
ensure_executable,
find_vscode_extension_binary,
)
EXTENSION_ID = "analysis-services.powerbi-modeling-mcp"
PUBLISHER = "analysis-services"
EXTENSION_NAME = "powerbi-modeling-mcp"
MARKETPLACE_API = "https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery"
VSIX_URL_TEMPLATE = (
"https://marketplace.visualstudio.com/_apis/public/gallery/publishers/"
"{publisher}/vsextensions/{extension}/{version}/vspackage"
"?targetPlatform={platform}"
)
def resolve_binary() -> Path:
"""Resolve the MCP server binary path using the priority chain.
Priority:
1. PBI_MCP_BINARY environment variable
2. ~/.pbi-cli/bin/{version}/ (auto-downloaded on first connect)
3. VS Code extension fallback
Raises FileNotFoundError if no binary is found.
"""
env_path = os.environ.get("PBI_MCP_BINARY")
if env_path:
p = Path(env_path)
if p.exists():
return p
raise FileNotFoundError(f"PBI_MCP_BINARY points to non-existent path: {env_path}")
config = load_config()
if config.binary_path:
p = Path(config.binary_path)
if p.exists():
return p
managed = _find_managed_binary()
if managed:
return managed
vscode_bin = find_vscode_extension_binary()
if vscode_bin:
print_info(f"Using VS Code extension binary: {vscode_bin}")
return vscode_bin
raise FileNotFoundError(
"Power BI MCP binary not found. Run 'pbi connect' or 'pbi setup' to download it, "
"or set PBI_MCP_BINARY environment variable."
)
def _find_managed_binary() -> Path | None:
"""Look for a binary in ~/.pbi-cli/bin/."""
bin_dir = PBI_CLI_HOME / "bin"
if not bin_dir.exists():
return None
versions = sorted(bin_dir.iterdir(), reverse=True)
for version_dir in versions:
candidate = version_dir / binary_name()
if candidate.exists():
return candidate
return None
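A note on `_find_managed_binary`: `sorted(bin_dir.iterdir(), reverse=True)` orders version directories lexicographically, so a hypothetical `0.10.0` would sort below `0.9.0`. A minimal sketch of the pitfall and a numeric-aware sort key (the directory names are made up):

```python
# Sketch: lexicographic vs numeric version ordering (assumed dir names).
def version_key(name: str) -> tuple[int, ...]:
    """Split '0.10.0' into (0, 10, 0) so comparison is numeric per segment."""
    return tuple(int(part) for part in name.split(".") if part.isdigit())

dirs = ["0.4.0", "0.10.0", "0.9.0"]
assert sorted(dirs, reverse=True)[0] == "0.9.0"                     # lexicographic: misordered
assert sorted(dirs, key=version_key, reverse=True)[0] == "0.10.0"   # numeric: newest first
```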
def query_latest_version() -> str:
"""Query the VS Marketplace for the latest extension version.
Returns the version string (e.g., '0.4.0').
"""
payload = {
"filters": [
{
"criteria": [
{"filterType": 7, "value": EXTENSION_ID},
],
"pageNumber": 1,
"pageSize": 1,
}
],
"flags": 914,
}
headers = {
"Content-Type": "application/json",
"Accept": "application/json;api-version=6.1-preview.1",
}
with httpx.Client(timeout=30.0) as client:
resp = client.post(MARKETPLACE_API, json=payload, headers=headers)
resp.raise_for_status()
data = resp.json()
results = data.get("results", [])
if not results:
raise RuntimeError("No results from VS Marketplace query")
extensions = results[0].get("extensions", [])
if not extensions:
raise RuntimeError(f"Extension {EXTENSION_ID} not found on VS Marketplace")
versions = extensions[0].get("versions", [])
if not versions:
raise RuntimeError(f"No versions found for {EXTENSION_ID}")
return str(versions[0]["version"])
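The nested `results → extensions → versions` navigation above can be sketched as a pure function; the sample response below is fabricated for illustration, not a real Marketplace payload:

```python
# Sketch: pulling the newest version out of a Marketplace-style response dict.
def latest_version(data: dict) -> str:
    results = data.get("results", [])
    if not results:
        raise RuntimeError("No results from VS Marketplace query")
    extensions = results[0].get("extensions", [])
    if not extensions:
        raise RuntimeError("Extension not found on VS Marketplace")
    versions = extensions[0].get("versions", [])
    if not versions:
        raise RuntimeError("No versions found")
    return str(versions[0]["version"])

sample = {"results": [{"extensions": [{"versions": [{"version": "0.4.0"}]}]}]}
assert latest_version(sample) == "0.4.0"
```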
def download_and_extract(version: str | None = None) -> Path:
"""Download the VSIX and extract the server binary.
Args:
version: Specific version to download. If None, queries latest.
Returns:
Path to the extracted binary.
"""
if version is None:
print_info("Querying VS Marketplace for latest version...")
version = query_latest_version()
target_platform = detect_platform()
print_info(f"Downloading pbi-mcp v{version} for {target_platform}...")
url = VSIX_URL_TEMPLATE.format(
publisher=PUBLISHER,
extension=EXTENSION_NAME,
version=version,
platform=target_platform,
)
dest_dir = ensure_home_dir() / "bin" / version
dest_dir.mkdir(parents=True, exist_ok=True)
with tempfile.TemporaryDirectory() as tmp:
vsix_path = Path(tmp) / "extension.vsix"
with httpx.Client(timeout=120.0, follow_redirects=True) as client:
with client.stream("GET", url) as resp:
resp.raise_for_status()
total = int(resp.headers.get("content-length", 0))
downloaded = 0
with open(vsix_path, "wb") as f:
for chunk in resp.iter_bytes(chunk_size=8192):
f.write(chunk)
downloaded += len(chunk)
if total > 0:
pct = downloaded * 100 // total
print(f"\r Downloading... {pct}%", end="", flush=True)
print()
print_info("Extracting server binary...")
with zipfile.ZipFile(vsix_path, "r") as zf:
server_prefix = "extension/server/"
server_files = [n for n in zf.namelist() if n.startswith(server_prefix)]
if not server_files:
raise RuntimeError("No server/ directory found in VSIX package")
for file_name in server_files:
rel_path = file_name[len(server_prefix) :]
if not rel_path:
continue
target_path = dest_dir / rel_path
target_path.parent.mkdir(parents=True, exist_ok=True)
with zf.open(file_name) as src, open(target_path, "wb") as dst:
shutil.copyfileobj(src, dst)
bin_path = dest_dir / binary_name()
if not bin_path.exists():
raise RuntimeError(f"Binary not found after extraction: {bin_path}")
ensure_executable(bin_path)
config = load_config().with_updates(
binary_version=version,
binary_path=str(bin_path),
)
save_config(config)
print_success(f"Installed pbi-mcp v{version} at {dest_dir}")
return bin_path
def check_for_updates() -> tuple[str, str, bool]:
"""Compare installed version with latest available.
Returns (installed_version, latest_version, update_available).
"""
config = load_config()
installed = config.binary_version or "none"
latest = query_latest_version()
return installed, latest, installed != latest
def get_binary_info() -> dict[str, str]:
"""Return info about the currently resolved binary."""
try:
path = resolve_binary()
config = load_config()
return {
"binary_path": str(path),
"version": config.binary_version or "unknown",
"platform": detect_platform(),
"source": _binary_source(path),
}
except FileNotFoundError:
return {
"binary_path": "not found",
"version": "none",
"platform": detect_platform(),
"source": "none",
}
def _binary_source(path: Path) -> str:
"""Determine the source of a resolved binary path."""
path_str = str(path)
if "PBI_MCP_BINARY" in os.environ:
return "environment variable (PBI_MCP_BINARY)"
if ".pbi-cli" in path_str:
return "managed (auto-downloaded)"
if ".vscode" in path_str:
return "VS Code extension (fallback)"
return "unknown"


@@ -1,12 +1,12 @@
"""Configuration management for pbi-cli.
Manages ~/.pbi-cli/config.json for binary paths, versions, and preferences.
Manages ~/.pbi-cli/config.json for user preferences.
"""
from __future__ import annotations
import json
from dataclasses import asdict, dataclass, field
from dataclasses import asdict, dataclass
from pathlib import Path
PBI_CLI_HOME = Path.home() / ".pbi-cli"
@@ -17,10 +17,7 @@ CONFIG_FILE = PBI_CLI_HOME / "config.json"
class PbiConfig:
"""Immutable configuration object."""
binary_version: str = ""
binary_path: str = ""
default_connection: str = ""
binary_args: list[str] = field(default_factory=lambda: ["--start", "--skipconfirmation"])
def with_updates(self, **kwargs: object) -> PbiConfig:
"""Return a new config with the specified fields updated."""
@@ -42,10 +39,7 @@ def load_config() -> PbiConfig:
try:
raw = json.loads(CONFIG_FILE.read_text(encoding="utf-8"))
return PbiConfig(
binary_version=raw.get("binary_version", ""),
binary_path=raw.get("binary_path", ""),
default_connection=raw.get("default_connection", ""),
binary_args=raw.get("binary_args", ["--start", "--skipconfirmation"]),
)
except (json.JSONDecodeError, KeyError):
return PbiConfig()
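The `with_updates` copy-on-write pattern shown in this diff can be sketched with `dataclasses.replace`; `SketchConfig` below is a simplified stand-in, not the real `PbiConfig`:

```python
# Sketch: an immutable config object with copy-on-update, mirroring with_updates().
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SketchConfig:
    default_connection: str = ""

    def with_updates(self, **kwargs) -> "SketchConfig":
        # replace() builds a new instance; the frozen original is untouched.
        return replace(self, **kwargs)

cfg = SketchConfig()
cfg2 = cfg.with_updates(default_connection="dev")
assert cfg.default_connection == ""
assert cfg2.default_connection == "dev"
```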


@@ -0,0 +1,111 @@
"""CLR bootstrap: load pythonnet and Microsoft Analysis Services DLLs.
Uses .NET Framework (net45) DLLs bundled in ``pbi_cli/dlls/``.
Lazy-loaded on first access so import cost is zero until needed.
"""
from __future__ import annotations
import sys
from pathlib import Path
from typing import Any
_initialized = False
def _dll_dir() -> Path:
"""Return the path to the bundled DLL directory."""
return Path(__file__).resolve().parent.parent / "dlls"
def _ensure_initialized() -> None:
"""Initialize the CLR runtime and load Analysis Services assemblies.
Idempotent: safe to call multiple times.
"""
global _initialized
if _initialized:
return
try:
import pythonnet
from clr_loader import get_netfx
except ImportError as e:
raise ImportError(
"pythonnet is required for direct Power BI connection.\n"
"Install it with: pip install pythonnet"
) from e
rt = get_netfx()
pythonnet.set_runtime(rt)
import clr # noqa: E402 (must import after set_runtime)
dll_path = _dll_dir()
if not dll_path.exists():
raise FileNotFoundError(
f"Bundled DLL directory not found: {dll_path}\n"
"Reinstall pbi-cli-tool: pipx install pbi-cli-tool --force"
)
sys.path.insert(0, str(dll_path))
clr.AddReference("Microsoft.AnalysisServices.Tabular")
clr.AddReference("Microsoft.AnalysisServices.AdomdClient")
_initialized = True
def get_server_class() -> Any:
"""Return the ``Microsoft.AnalysisServices.Tabular.Server`` class."""
_ensure_initialized()
from Microsoft.AnalysisServices.Tabular import Server # type: ignore[import-untyped]
return Server
def get_adomd_connection_class() -> Any:
"""Return the ``AdomdConnection`` class."""
_ensure_initialized()
from Microsoft.AnalysisServices.AdomdClient import (
AdomdConnection, # type: ignore[import-untyped]
)
return AdomdConnection
def get_adomd_command_class() -> Any:
"""Return the ``AdomdCommand`` class."""
_ensure_initialized()
from Microsoft.AnalysisServices.AdomdClient import AdomdCommand # type: ignore[import-untyped]
return AdomdCommand
def get_tmdl_serializer() -> Any:
"""Return the ``TmdlSerializer`` class."""
_ensure_initialized()
from Microsoft.AnalysisServices.Tabular import TmdlSerializer # type: ignore[import-untyped]
return TmdlSerializer
def get_tom_classes(*names: str) -> tuple[Any, ...]:
"""Return one or more classes from ``Microsoft.AnalysisServices.Tabular``.
Example::
Measure, Table = get_tom_classes("Measure", "Table")
"""
_ensure_initialized()
import Microsoft.AnalysisServices.Tabular as TOM # type: ignore[import-untyped]
results: list[Any] = []
for name in names:
cls = getattr(TOM, name, None)
if cls is None:
raise AttributeError(
f"Class '{name}' not found in Microsoft.AnalysisServices.Tabular"
)
results.append(cls)
return tuple(results)


@@ -16,13 +16,13 @@ class PbiCliError(click.ClickException):
super().__init__(message)
class BinaryNotFoundError(PbiCliError):
"""Raised when the MCP server binary cannot be resolved."""
class DotNetNotFoundError(PbiCliError):
"""Raised when pythonnet or the bundled .NET DLLs are missing."""
def __init__(
self,
message: str = (
"Power BI MCP binary not found. Run 'pbi connect' or 'pbi setup' to download it."
"pythonnet is required. Install it with: pip install pythonnet"
),
) -> None:
super().__init__(message)
@@ -35,10 +35,10 @@ class ConnectionRequiredError(PbiCliError):
super().__init__(message)
class McpToolError(PbiCliError):
"""Raised when an MCP tool call fails."""
class TomError(PbiCliError):
"""Raised when a TOM operation fails."""
def __init__(self, tool_name: str, detail: str) -> None:
self.tool_name = tool_name
def __init__(self, operation: str, detail: str) -> None:
self.operation = operation
self.detail = detail
super().__init__(f"{tool_name}: {detail}")
super().__init__(f"{operation}: {detail}")
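The new `TomError` carries the failing operation and detail and bakes both into the exception message. A self-contained sketch of the same shape (a plain `Exception` stands in for `PbiCliError`):

```python
# Sketch: TomError-style exception keeping structured fields plus a formatted message.
class SketchTomError(Exception):
    def __init__(self, operation: str, detail: str) -> None:
        self.operation = operation
        self.detail = detail
        super().__init__(f"{operation}: {detail}")

err = SketchTomError("measure create", "duplicate name")
assert err.operation == "measure create"
assert str(err) == "measure create: duplicate name"
```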


@@ -1,252 +0,0 @@
"""MCP client: communicates with the Power BI MCP server binary over stdio.
Uses the official `mcp` Python SDK to handle JSON-RPC framing and protocol
negotiation. Exposes a synchronous API for Click commands while managing
an async event loop internally.
"""
from __future__ import annotations
import asyncio
import atexit
from pathlib import Path
from typing import Any
from mcp import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client
from pbi_cli.core.binary_manager import resolve_binary
from pbi_cli.core.config import load_config
class McpClientError(Exception):
"""Raised when the MCP server returns an error."""
class PbiMcpClient:
"""Synchronous wrapper around the async MCP stdio client.
Usage:
client = PbiMcpClient()
result = client.call_tool("measure_operations", {
"operation": "List",
"connectionName": "my-conn",
})
"""
def __init__(
self,
binary_path: str | Path | None = None,
args: list[str] | None = None,
) -> None:
self._binary_path = str(binary_path) if binary_path else None
self._args = args
self._loop: asyncio.AbstractEventLoop | None = None
self._session: ClientSession | None = None
self._cleanup_stack: Any = None
self._started = False
def _resolve_binary(self) -> str:
"""Resolve binary path lazily."""
if self._binary_path:
return self._binary_path
return str(resolve_binary())
def _resolve_args(self) -> list[str]:
"""Resolve binary args from config or defaults."""
if self._args is not None:
return self._args
config = load_config()
return list(config.binary_args)
def _ensure_loop(self) -> asyncio.AbstractEventLoop:
"""Get or create the event loop."""
if self._loop is None or self._loop.is_closed():
self._loop = asyncio.new_event_loop()
return self._loop
def start(self) -> None:
"""Start the MCP server process and initialize the session."""
if self._started:
return
loop = self._ensure_loop()
loop.run_until_complete(self._async_start())
self._started = True
atexit.register(self.stop)
async def _async_start(self) -> None:
"""Async startup: spawn the server and initialize MCP session."""
binary = self._resolve_binary()
args = self._resolve_args()
server_params = StdioServerParameters(
command=binary,
args=args,
)
# Create the stdio transport
self._read_stream, self._write_stream = await self._enter_context(
stdio_client(server_params)
)
# Create and initialize the MCP session
self._session = await self._enter_context(
ClientSession(self._read_stream, self._write_stream)
)
await self._session.initialize()
async def _enter_context(self, cm: Any) -> Any:
"""Enter an async context manager and track it for cleanup."""
if self._cleanup_stack is None:
self._cleanup_stack = []
result = await cm.__aenter__()
self._cleanup_stack.append(cm)
return result
def call_tool(self, tool_name: str, request: dict[str, Any]) -> Any:
"""Call an MCP tool synchronously.
Args:
tool_name: The MCP tool name (e.g., "measure_operations").
request: The request dict (will be wrapped as {"request": request}).
Returns:
The parsed result from the MCP server.
Raises:
McpClientError: If the server returns an error.
"""
if not self._started:
self.start()
loop = self._ensure_loop()
return loop.run_until_complete(self._async_call_tool(tool_name, request))
async def _async_call_tool(self, tool_name: str, request: dict[str, Any]) -> Any:
"""Execute a tool call via the MCP session."""
if self._session is None:
raise McpClientError("MCP session not initialized. Call start() first.")
result = await self._session.call_tool(
tool_name,
arguments={"request": request},
)
if result.isError:
error_text = _extract_text(result.content)
raise McpClientError(f"MCP tool error: {error_text}")
return _parse_content(result.content)
def list_tools(self) -> list[dict[str, Any]]:
"""List all available MCP tools."""
if not self._started:
self.start()
loop = self._ensure_loop()
return loop.run_until_complete(self._async_list_tools())
async def _async_list_tools(self) -> list[dict[str, Any]]:
"""List tools from the MCP session."""
if self._session is None:
raise McpClientError("MCP session not initialized.")
result = await self._session.list_tools()
return [
{
"name": tool.name,
"description": tool.description or "",
}
for tool in result.tools
]
def stop(self) -> None:
"""Shut down the MCP server process."""
if not self._started:
return
loop = self._ensure_loop()
loop.run_until_complete(self._async_stop())
self._started = False
self._session = None
async def _async_stop(self) -> None:
"""Clean up all async context managers in reverse order."""
if self._cleanup_stack:
for cm in reversed(self._cleanup_stack):
try:
await cm.__aexit__(None, None, None)
except Exception:
pass
self._cleanup_stack = []
def __del__(self) -> None:
try:
self.stop()
except Exception:
pass
def _extract_text(content: Any) -> str:
"""Extract text from MCP content blocks."""
if isinstance(content, list):
parts = []
for block in content:
if hasattr(block, "text"):
parts.append(block.text)
return "\n".join(parts) if parts else str(content)
return str(content)
def _parse_content(content: Any) -> Any:
"""Parse MCP content blocks into Python data.
MCP returns content as a list of TextContent blocks. This function
tries to parse the text as JSON, falling back to raw text.
"""
import json
if isinstance(content, list):
texts = []
for block in content:
if hasattr(block, "text"):
texts.append(block.text)
if len(texts) == 1:
try:
return json.loads(texts[0])
except (json.JSONDecodeError, ValueError):
return texts[0]
combined = "\n".join(texts)
try:
return json.loads(combined)
except (json.JSONDecodeError, ValueError):
return combined
return content
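The JSON-with-text-fallback behavior of `_parse_content` can be sketched self-contained; `TextBlock` here is a stand-in for the SDK's content blocks, not the real type:

```python
# Sketch: parse MCP-style text blocks as JSON, falling back to raw text.
import json
from dataclasses import dataclass

@dataclass
class TextBlock:
    text: str

def parse_content(content):
    if isinstance(content, list):
        texts = [b.text for b in content if hasattr(b, "text")]
        combined = texts[0] if len(texts) == 1 else "\n".join(texts)
        try:
            return json.loads(combined)
        except (json.JSONDecodeError, ValueError):
            return combined
    return content

assert parse_content([TextBlock('{"rows": 3}')]) == {"rows": 3}
assert parse_content([TextBlock("plain text")]) == "plain text"
```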
# Module-level singleton for REPL mode (keeps server alive across commands).
_shared_client: PbiMcpClient | None = None
def get_shared_client() -> PbiMcpClient:
"""Get or create a shared MCP client instance."""
global _shared_client
if _shared_client is None:
_shared_client = PbiMcpClient()
return _shared_client
def get_client(repl_mode: bool = False) -> PbiMcpClient:
"""Get an MCP client.
In REPL mode, returns a shared long-lived client.
In one-shot mode, returns a fresh client (caller should stop() it).
"""
if repl_mode:
return get_shared_client()
return PbiMcpClient()


@@ -60,8 +60,8 @@ def print_key_value(title: str, data: dict[str, Any]) -> None:
console.print(Panel("\n".join(lines), title=title, border_style="cyan"))
def format_mcp_result(result: Any, json_output: bool) -> None:
"""Format and print an MCP tool result.
def format_result(result: Any, json_output: bool) -> None:
"""Format and print a command result.
In JSON mode, prints the raw result. In human mode, attempts to render
a table or key-value display based on the shape of the data.

src/pbi_cli/core/session.py (new file, 141 lines)

@@ -0,0 +1,141 @@
"""Connection session manager for Power BI Desktop.
Maintains a persistent connection to the Analysis Services engine,
reusable across commands in both REPL and one-shot modes.
"""
from __future__ import annotations
import atexit
from dataclasses import dataclass
from typing import Any
@dataclass(frozen=True)
class Session:
"""An active connection to a Power BI Analysis Services instance."""
server: Any # Microsoft.AnalysisServices.Tabular.Server
database: Any # Microsoft.AnalysisServices.Tabular.Database
model: Any # Microsoft.AnalysisServices.Tabular.Model
adomd_connection: Any # Microsoft.AnalysisServices.AdomdClient.AdomdConnection
connection_name: str
data_source: str
# Module-level session for REPL mode persistence
_current_session: Session | None = None
def connect(data_source: str, catalog: str = "") -> Session:
"""Connect to an Analysis Services instance.
Args:
data_source: The data source (e.g., ``localhost:57947``).
catalog: Optional initial catalog / database name.
Returns:
A new ``Session`` with active TOM and ADOMD connections.
"""
from pbi_cli.core.dotnet_loader import get_adomd_connection_class, get_server_class
Server = get_server_class()
AdomdConnection = get_adomd_connection_class()
conn_str = f"Provider=MSOLAP;Data Source={data_source}"
if catalog:
conn_str += f";Initial Catalog={catalog}"
server = Server()
server.Connect(conn_str)
# Pick the first database (PBI Desktop has exactly one)
db = server.Databases[0]
model = db.Model
# Build connection name from database info
db_name = str(db.Name) if db.Name else ""
connection_name = f"PBIDesktop-{db_name[:20]}-{data_source.split(':')[-1]}"
# ADOMD connection for DAX queries
adomd_conn = AdomdConnection(conn_str)
adomd_conn.Open()
session = Session(
server=server,
database=db,
model=model,
adomd_connection=adomd_conn,
connection_name=connection_name,
data_source=data_source,
)
global _current_session
_current_session = session
return session
def disconnect(session: Session | None = None) -> None:
"""Disconnect an active session."""
global _current_session
target = session or _current_session
if target is None:
return
try:
target.adomd_connection.Close()
except Exception:
pass
try:
target.server.Disconnect()
except Exception:
pass
if target is _current_session:
_current_session = None
def get_current_session() -> Session | None:
"""Return the current session, or None if not connected."""
return _current_session
def ensure_connected() -> Session:
"""Return the current session, raising if not connected."""
from pbi_cli.core.errors import ConnectionRequiredError
if _current_session is None:
raise ConnectionRequiredError()
return _current_session
def get_session_for_command(ctx: Any) -> Session:
"""Get or establish a session for a CLI command.
In REPL mode, returns the existing session.
In one-shot mode, reconnects from the saved connection store.
"""
global _current_session
if ctx.repl_mode and _current_session is not None:
return _current_session
# One-shot mode: reconnect from saved connection
from pbi_cli.core.connection_store import get_active_connection, load_connections
store = load_connections()
conn = get_active_connection(store, override=ctx.connection)
if conn is None:
from pbi_cli.core.errors import ConnectionRequiredError
raise ConnectionRequiredError()
return connect(conn.data_source, conn.initial_catalog)
@atexit.register
def _cleanup() -> None:
"""Disconnect on process exit."""
disconnect()
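The connection-string and connection-name logic in `connect()` is pure string work and can be sketched (and sanity-checked) without pythonnet or a running Desktop instance:

```python
# Sketch: the string-building half of connect(), factored out for illustration.
def build_conn_str(data_source: str, catalog: str = "") -> str:
    conn_str = f"Provider=MSOLAP;Data Source={data_source}"
    if catalog:
        conn_str += f";Initial Catalog={catalog}"
    return conn_str

def build_connection_name(db_name: str, data_source: str) -> str:
    # Truncate long database names; suffix with the Desktop port.
    return f"PBIDesktop-{db_name[:20]}-{data_source.split(':')[-1]}"

assert build_conn_str("localhost:57947") == "Provider=MSOLAP;Data Source=localhost:57947"
assert build_connection_name("SalesModel", "localhost:57947") == "PBIDesktop-SalesModel-57947"
```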

File diff suppressed because it is too large

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,3 @@
# Bundled Microsoft Analysis Services .NET DLLs.
# Sourced from NuGet: Microsoft.AnalysisServices.NetCore.retail.amd64
# and Microsoft.AnalysisServices.AdomdClient.NetCore.retail.amd64


@@ -0,0 +1 @@
{"runtimeOptions": {"tfm": "net9.0", "framework": {"name": "Microsoft.NETCore.App", "version": "9.0.0"}, "rollForward": "LatestMajor"}}


@@ -40,10 +40,10 @@ pass_context = click.make_pass_decorator(PbiContext, ensure=True)
def cli(ctx: click.Context, json_output: bool, connection: str | None) -> None:
"""pbi-cli: Power BI semantic model CLI.
Wraps the Power BI MCP server for token-efficient usage with
Claude Code and other AI agents.
Connects directly to Power BI Desktop's Analysis Services engine
for token-efficient usage with Claude Code and other AI agents.
Run 'pbi connect' to auto-detect Power BI Desktop and download the MCP binary.
Run 'pbi connect' to auto-detect a running Power BI Desktop instance.
"""
ctx.ensure_object(PbiContext)
ctx.obj = PbiContext(json_output=json_output, connection=connection)
@@ -55,7 +55,7 @@ def _register_commands() -> None:
from pbi_cli.commands.calc_group import calc_group
from pbi_cli.commands.calendar import calendar
from pbi_cli.commands.column import column
from pbi_cli.commands.connection import connect, connect_fabric, connections, disconnect
from pbi_cli.commands.connection import connect, connections, disconnect
from pbi_cli.commands.database import database
from pbi_cli.commands.dax import dax
from pbi_cli.commands.expression import expression
@@ -75,7 +75,6 @@ def _register_commands() -> None:
cli.add_command(setup)
cli.add_command(connect)
cli.add_command(connect_fabric)
cli.add_command(disconnect)
cli.add_command(connections)
cli.add_command(dax)


@@ -12,7 +12,7 @@ Execute and validate DAX queries against connected Power BI models.
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Executing Queries
@@ -30,8 +30,6 @@ echo "EVALUATE Sales" | pbi dax execute -
# With options
pbi dax execute "EVALUATE Sales" --max-rows 100
pbi dax execute "EVALUATE Sales" --metrics # Include execution metrics
pbi dax execute "EVALUATE Sales" --metrics-only # Metrics without data
pbi dax execute "EVALUATE Sales" --timeout 300 # Custom timeout (seconds)
# JSON output for scripting
@@ -163,7 +161,6 @@ TOPN(
## Performance Tips
- Use `--metrics` to identify slow queries
- Use `--max-rows` to limit result sets during development
- Run `pbi dax clear-cache` before benchmarking
- Prefer `SUMMARIZECOLUMNS` over `SUMMARIZE` for grouping


@@ -1,18 +1,18 @@
---
name: Power BI Deployment
description: Deploy Power BI semantic models to Fabric workspaces, import and export TMDL and TMSL formats, and manage model lifecycle. Use when the user mentions deploying, publishing, migrating, or version-controlling Power BI models.
description: Import and export TMDL and TMSL formats, manage model lifecycle with transactions, and version-control Power BI semantic models. Use when the user mentions deploying, publishing, migrating, or version-controlling Power BI models.
tools: pbi-cli
---
# Power BI Deployment Skill
Manage model lifecycle with TMDL export/import and Fabric workspace deployment.
Manage model lifecycle with TMDL export/import, transactions, and version control.
## Prerequisites
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Connecting to Targets
@@ -24,13 +24,11 @@ pbi connect
# Local with explicit port
pbi connect -d localhost:54321
# Fabric workspace (cloud)
pbi connect-fabric --workspace "Production" --model "Sales Model"
# Named connections for switching
pbi connect --name dev
pbi connect-fabric --workspace "Production" --model "Sales" --name prod
pbi connect -d localhost:54321 --name dev
pbi connections list
pbi connections last
pbi disconnect
```
## TMDL Export and Import
@@ -43,13 +41,6 @@ pbi database export-tmdl ./model-tmdl/
# Import TMDL folder into connected model
pbi database import-tmdl ./model-tmdl/
# Export individual objects
pbi model export-tmdl # Full model definition
pbi table export-tmdl Sales # Single table
pbi measure export-tmdl "Total Revenue" -t Sales # Single measure
pbi relationship export-tmdl RelName # Single relationship
pbi security-role export-tmdl "Readers" # Security role
```
## TMSL Export
@@ -85,18 +76,14 @@ pbi transaction commit
pbi transaction rollback
```
## Model Refresh
## Table Refresh
```bash
# Refresh entire model
pbi model refresh # Automatic (default)
pbi model refresh --type Full # Full refresh
pbi model refresh --type Calculate # Recalculate only
pbi model refresh --type DataOnly # Data only, no recalc
pbi model refresh --type Defragment # Defragment storage
# Refresh individual tables
pbi table refresh Sales --type Full
pbi table refresh Sales --type Automatic
pbi table refresh Sales --type Calculate
pbi table refresh Sales --type DataOnly
```
## Workflow: Version Control with Git
@@ -110,26 +97,11 @@ cd model/
git add .
git commit -m "feat: add new revenue measures"
# 3. Deploy to another environment
pbi connect-fabric --workspace "Staging" --model "Sales Model"
# 3. Later, import back into Power BI Desktop
pbi connect
pbi database import-tmdl ./model/
```
## Workflow: Promote Dev to Production
```bash
# 1. Connect to dev and export
pbi connect --data-source localhost:54321 --name dev
pbi database export-tmdl ./staging-model/
# 2. Connect to production and import
pbi connect-fabric --workspace "Production" --model "Sales" --name prod
pbi database import-tmdl ./staging-model/
# 3. Refresh production data
pbi model refresh --type Full
```
## Workflow: Inspect Model Before Deploy
```bash
@@ -152,4 +124,4 @@ pbi --json relationship list
- Test changes in dev before deploying to production
- Use `--json` for scripted deployments
- Store TMDL in git for version history
- Use named connections (`--name`) to avoid accidental deployments to wrong environment
- Use named connections (`--name`) to avoid accidental changes to wrong environment


@@ -0,0 +1,139 @@
---
name: Power BI Diagnostics
description: Troubleshoot Power BI model performance, trace query execution, manage caches, and verify the pbi-cli environment. Use when the user mentions slow queries, performance issues, tracing, profiling, or setup problems.
tools: pbi-cli
---
# Power BI Diagnostics Skill
Troubleshoot performance, trace queries, and verify the pbi-cli environment.
## Prerequisites
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Environment Check
```bash
# Verify pythonnet and .NET DLLs are installed
pbi setup
# Show detailed environment info (version, DLL paths, pythonnet status)
pbi setup --info
pbi --json setup --info
# Check CLI version
pbi --version
```
## Model Health Check
```bash
# Quick model overview
pbi --json model get
# Object counts (tables, columns, measures, relationships, partitions)
pbi --json model stats
# List all tables with column/measure counts
pbi --json table list
```
## Query Tracing
Capture diagnostic events during DAX query execution:
```bash
# Start a trace
pbi trace start
# Execute the query you want to profile
pbi dax execute "EVALUATE SUMMARIZECOLUMNS(Products[Category], \"Total\", SUM(Sales[Amount]))"
# Stop the trace
pbi trace stop
# Fetch captured trace events
pbi --json trace fetch
# Export trace events to a file
pbi trace export ./trace-output.json
```
## Cache Management
```bash
# Clear the formula engine cache (do this before benchmarking)
pbi dax clear-cache
```
## Connection Diagnostics
```bash
# List all saved connections
pbi connections list
pbi --json connections list
# Show the last-used connection
pbi connections last
# Reconnect to a specific data source
pbi connect -d localhost:54321
# Disconnect
pbi disconnect
```
## Workflow: Profile a Slow Query
```bash
# 1. Clear cache for a clean benchmark
pbi dax clear-cache
# 2. Start tracing
pbi trace start
# 3. Run the slow query
pbi dax execute "EVALUATE SUMMARIZECOLUMNS(Products[Category], \"Total\", SUM(Sales[Amount]))" --timeout 300
# 4. Stop tracing
pbi trace stop
# 5. Export trace for analysis
pbi trace export ./slow-query-trace.json
# 6. Review trace events
pbi --json trace fetch
```
## Workflow: Model Health Audit
```bash
# 1. Model overview
pbi --json model get
pbi --json model stats
# 2. Check table sizes and structure
pbi --json table list
# 3. Review relationships
pbi --json relationship list
# 4. Check security roles
pbi --json security-role list
# 5. Export full model for offline review
pbi database export-tmdl ./audit-export/
```
## Best Practices
- Clear cache before benchmarking: `pbi dax clear-cache`
- Use `--timeout` for long-running queries to avoid premature cancellation
- Export traces to files for sharing with teammates
- Run `pbi setup --info` first when troubleshooting environment issues
- Use `--json` output for automated monitoring scripts
- Use `pbi repl` for interactive debugging sessions with persistent connection


@@ -12,7 +12,7 @@ Generate comprehensive documentation for Power BI semantic models.
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Quick Model Overview
@@ -55,8 +55,14 @@ pbi --json calc-group list
# Perspectives
pbi --json perspective list
# Named expressions
# Named expressions (M queries)
pbi --json expression list
# Partitions
pbi --json partition list --table Sales
# Calendar/date tables
pbi --json calendar list
```
## Export Full Model as TMDL
@@ -117,23 +123,23 @@ Create a complete measure inventory:
# List all measures with expressions
pbi --json measure list
# Export individual measure definitions as TMDL
pbi measure export-tmdl "Total Revenue" --table Sales
pbi measure export-tmdl "YTD Revenue" --table Sales
# Export full model as TMDL (includes all measure definitions)
pbi database export-tmdl ./tmdl-export/
```
## Translation and Culture Management
## Culture Management
For multi-language documentation:
For multi-language models:
```bash
# List cultures/translations
# List cultures (locales)
pbi --json advanced culture list
pbi --json advanced translation list
# Create culture for localization
# Create a culture for localization
pbi advanced culture create "fr-FR"
pbi advanced translation create --culture "fr-FR" --object "Total Sales" --translation "Ventes Totales"
# Delete a culture
pbi advanced culture delete "fr-FR"
```
## Best Practices


@@ -12,7 +12,7 @@ Use pbi-cli to manage semantic model structure. Requires `pipx install pbi-cli-t
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Tables
@@ -26,7 +26,6 @@ pbi table rename OldName NewName # Rename table
pbi table refresh Sales --type Full # Refresh table data
pbi table schema Sales # Get table schema
pbi table mark-date Calendar --date-column Date # Mark as date table
pbi table export-tmdl Sales # Export as TMDL
```
## Columns
@@ -53,7 +52,6 @@ pbi measure update "Total Revenue" -t Sales -e "SUMX(Sales, Sales[Qty]*Sales[Pri
pbi measure delete "Old Measure" -t Sales # Delete
pbi measure rename "Old" "New" -t Sales # Rename
pbi measure move "Revenue" -t Sales --to-table Finance # Move to another table
pbi measure export-tmdl "Total Revenue" -t Sales # Export as TMDL
```
## Relationships
@@ -65,7 +63,9 @@ pbi relationship create \
--from-table Sales --from-column ProductKey \
--to-table Products --to-column ProductKey # Create relationship
pbi relationship delete RelName # Delete
pbi relationship export-tmdl RelName # Export as TMDL
pbi relationship find --table Sales # Find relationships for a table
pbi relationship activate RelName # Activate
pbi relationship deactivate RelName # Deactivate
```
## Hierarchies
@@ -74,7 +74,6 @@ pbi relationship export-tmdl RelName # Export as TMDL
pbi hierarchy list --table Date # List hierarchies
pbi hierarchy get "Calendar" --table Date # Get details
pbi hierarchy create "Calendar" --table Date # Create
pbi hierarchy add-level "Calendar" --table Date --column Year --ordinal 0 # Add level
pbi hierarchy delete "Calendar" --table Date # Delete
```
@ -124,4 +123,4 @@ pbi relationship list
- Organize measures into display folders by business domain
- Always mark calendar tables with `mark-date` for time intelligence
- Use `--json` flag when scripting: `pbi --json measure list`
- Export TMDL for version control: `pbi table export-tmdl Sales`
- Export TMDL for version control: `pbi database export-tmdl ./model/`

View file

@ -0,0 +1,133 @@
---
name: Power BI Partitions & Expressions
description: Manage Power BI table partitions, named expressions (M/Power Query sources), and calendar table configuration. Use when the user mentions partitions, data sources, M expressions, Power Query, incremental refresh, or calendar/date tables.
tools: pbi-cli
---
# Power BI Partitions & Expressions Skill
Manage table partitions, named expressions (M queries), and calendar tables.
## Prerequisites
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Partitions
Partitions define how data is loaded into a table. Each table has at least one partition.
```bash
# List partitions in a table
pbi partition list --table Sales
pbi --json partition list --table Sales
# Create a partition with an M expression
pbi partition create "Sales_2024" --table Sales \
--expression "let Source = Sql.Database(\"server\", \"db\"), Sales = Source{[Schema=\"dbo\",Item=\"Sales\"]}[Data], Filtered = Table.SelectRows(Sales, each [Year] = 2024) in Filtered" \
--mode Import
# Create a partition with DirectQuery mode
pbi partition create "Sales_Live" --table Sales --mode DirectQuery
# Delete a partition
pbi partition delete "Sales_Old" --table Sales
# Refresh a specific partition
pbi partition refresh "Sales_2024" --table Sales
```
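For scripting, the `--json` output can be consumed directly. A minimal sketch, assuming `pbi --json partition list --table Sales` emits an array of objects with `name` and `mode` fields (a hypothetical shape — verify against your installed version's actual output):

```python
import json

# Hypothetical captured output of: pbi --json partition list --table Sales
raw = '[{"name": "Sales_2023", "mode": "Import"}, {"name": "Sales_2024", "mode": "Import"}]'

partitions = json.loads(raw)

# Pick only the partitions worth refreshing (e.g. the current year)
to_refresh = [p["name"] for p in partitions if p["name"].endswith("2024")]
print(to_refresh)  # ['Sales_2024']
```

Each name in `to_refresh` would then be passed to `pbi partition refresh <name> --table Sales`.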
## Named Expressions
Named expressions are shared M/Power Query definitions used as data sources or reusable query logic.
```bash
# List all named expressions
pbi expression list
pbi --json expression list
# Get a specific expression
pbi expression get "ServerURL"
pbi --json expression get "ServerURL"
# Create a named expression (M query)
pbi expression create "ServerURL" \
--expression '"https://api.example.com/data"' \
--description "API endpoint for data refresh"
# Create a parameterized data source
pbi expression create "DatabaseServer" \
--expression '"sqlserver.company.com"' \
--description "Production database server name"
# Delete a named expression
pbi expression delete "OldSource"
```
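Note the double layer of quoting in the examples above: the M expression for a text value is itself a quoted string literal (`"https://..."`), which is then wrapped in shell single quotes. A sketch of building such arguments programmatically — the helper name and argument layout are illustrative, not part of pbi-cli:

```python
import shlex

def expression_create_cmd(name: str, m_value: str, description: str) -> str:
    # The M expression for a text parameter is the value wrapped in double quotes.
    m_literal = f'"{m_value}"'
    return " ".join([
        "pbi", "expression", "create", shlex.quote(name),
        "--expression", shlex.quote(m_literal),
        "--description", shlex.quote(description),
    ])

print(expression_create_cmd("ServerURL", "https://api.example.com/data", "API endpoint"))
```

This reproduces the shell-quoted form shown above, so generated commands stay safe even when values contain spaces.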
## Calendar Tables
Calendar/date tables enable time intelligence in DAX. Mark a table as a date table to unlock functions such as TOTALYTD and SAMEPERIODLASTYEAR.
```bash
# List all calendar/date tables
pbi calendar list
pbi --json calendar list
# Mark a table as a calendar table
pbi calendar mark Calendar --date-column Date
# Alternative: use the table command
pbi table mark-date Calendar --date-column Date
```
## Workflow: Set Up Partitioned Table
```bash
# 1. Create a table
pbi table create Sales --mode Import
# 2. Create partitions for different date ranges
pbi partition create "Sales_2023" --table Sales \
--expression "let Source = ... in Filtered2023" \
--mode Import
pbi partition create "Sales_2024" --table Sales \
--expression "let Source = ... in Filtered2024" \
--mode Import
# 3. Refresh specific partitions
pbi partition refresh "Sales_2024" --table Sales
# 4. Verify partitions
pbi --json partition list --table Sales
```
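The per-year pattern in step 2 scales naturally with a small generator. A sketch that keeps the M filter body elided with `...` just as in the workflow above, so the emitted commands are templates to fill in, not runnable as-is:

```python
def yearly_partition_cmds(table: str, years: range) -> list[str]:
    """Build one 'pbi partition create' command template per year."""
    cmds = []
    for year in years:
        expr = f"let Source = ... in Filtered{year}"  # replace with the real M query
        cmds.append(
            f'pbi partition create "{table}_{year}" --table {table} '
            f'--expression "{expr}" --mode Import'
        )
    return cmds

for cmd in yearly_partition_cmds("Sales", range(2023, 2025)):
    print(cmd)
```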
## Workflow: Manage Data Sources
```bash
# 1. List current data source expressions
pbi --json expression list
# 2. Create shared connection parameters
pbi expression create "ServerName" \
--expression '"prod-sql-01.company.com"' \
--description "Production SQL Server"
pbi expression create "DatabaseName" \
--expression '"SalesDB"' \
--description "Production database"
# 3. Verify
pbi --json expression list
```
## Best Practices
- Use partitions for large tables to enable incremental refresh
- Refresh only the partitions that have new data (`pbi partition refresh`)
- Use named expressions for shared connection parameters (server names, URLs)
- Always mark calendar tables with `pbi calendar mark` for time intelligence
- Use `--json` output for scripted partition management
- Export model as TMDL to version-control partition definitions: `pbi database export-tmdl ./model/`

View file

@ -12,7 +12,7 @@ Manage row-level security (RLS) and perspectives for Power BI models.
```bash
pipx install pbi-cli-tool
pbi connect # Auto-detects Power BI Desktop, downloads binary, installs skills
pbi connect # Auto-detects Power BI Desktop and installs skills
```
## Security Roles (RLS)
@ -30,9 +30,6 @@ pbi security-role create "Regional Manager" \
# Delete a role
pbi security-role delete "Regional Manager"
# Export role as TMDL
pbi security-role export-tmdl "Regional Manager"
```
## Perspectives
@ -60,9 +57,8 @@ pbi security-role create "Finance Team" --description "Finance data only"
# 2. Verify roles were created
pbi --json security-role list
# 3. Export for version control
pbi security-role export-tmdl "Sales Team"
pbi security-role export-tmdl "Finance Team"
# 3. Export full model for version control (includes roles)
pbi database export-tmdl ./model-backup/
```
## Workflow: Create User-Focused Perspectives
@ -108,7 +104,7 @@ pbi security-role create "Manager View" \
- Create roles with clear, descriptive names
- Always add descriptions explaining the access restriction
- Export roles as TMDL for version control
- Export model as TMDL for version control (`pbi database export-tmdl`)
- Test RLS thoroughly before publishing to production
- Use perspectives to simplify the model for different user groups
- Document role-to-group mappings externally (RLS roles map to Azure AD groups in Power BI Service)

View file

@ -1,61 +1,10 @@
"""Platform and architecture detection for binary resolution."""
"""Platform detection and Power BI Desktop port discovery."""
from __future__ import annotations
import platform
import stat
from pathlib import Path
# Maps (system, machine) to VS Marketplace target platform identifier.
PLATFORM_MAP: dict[tuple[str, str], str] = {
("Windows", "AMD64"): "win32-x64",
("Windows", "x86_64"): "win32-x64",
("Windows", "ARM64"): "win32-arm64",
("Darwin", "arm64"): "darwin-arm64",
("Linux", "x86_64"): "linux-x64",
("Linux", "aarch64"): "linux-arm64",
}
# Binary name per OS.
BINARY_NAMES: dict[str, str] = {
"Windows": "powerbi-modeling-mcp.exe",
"Darwin": "powerbi-modeling-mcp",
"Linux": "powerbi-modeling-mcp",
}
def detect_platform() -> str:
"""Return the VS Marketplace target platform string for this machine.
Raises ValueError if the platform is unsupported.
"""
system = platform.system()
machine = platform.machine()
key = (system, machine)
target = PLATFORM_MAP.get(key)
if target is None:
raise ValueError(
f"Unsupported platform: {system}/{machine}. "
f"Supported: {', '.join(f'{s}/{m}' for s, m in PLATFORM_MAP)}"
)
return target
def binary_name() -> str:
"""Return the expected binary filename for this OS."""
system = platform.system()
name = BINARY_NAMES.get(system)
if name is None:
raise ValueError(f"Unsupported OS: {system}")
return name
def ensure_executable(path: Path) -> None:
"""Set executable permission on non-Windows systems."""
if platform.system() != "Windows":
current = path.stat().st_mode
path.chmod(current | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
def _workspace_candidates() -> list[Path]:
"""Return candidate AnalysisServicesWorkspaces directories.
@ -116,27 +65,3 @@ def discover_pbi_port() -> int | None:
return int(port_text)
except (ValueError, OSError):
return None
def find_vscode_extension_binary() -> Path | None:
"""Look for the binary in the VS Code extension install directory.
This is the fallback resolution path when the user has the VS Code
extension installed but hasn't run 'pbi connect' or 'pbi setup'.
"""
vscode_ext_dir = Path.home() / ".vscode" / "extensions"
if not vscode_ext_dir.exists():
return None
matches = sorted(
vscode_ext_dir.glob("analysis-services.powerbi-modeling-mcp-*/server"),
reverse=True,
)
if not matches:
return None
server_dir = matches[0]
bin_path = server_dir / binary_name()
if bin_path.exists():
return bin_path
return None

View file

@ -1,7 +1,7 @@
"""Interactive REPL for pbi-cli with persistent MCP connection.
"""Interactive REPL for pbi-cli with persistent session.
Keeps the Power BI MCP server process alive across commands so that
subsequent calls skip the startup cost (~2-3 seconds per invocation).
Keeps a direct .NET connection alive across commands so that
subsequent calls are near-instant (no reconnection overhead).
Usage:
pbi repl
@ -19,9 +19,8 @@ from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.history import FileHistory
from pbi_cli.core.config import PBI_CLI_HOME, ensure_home_dir
from pbi_cli.core.connection_store import load_connections
from pbi_cli.core.mcp_client import get_shared_client
from pbi_cli.core.output import print_error, print_info, print_warning
from pbi_cli.core.session import get_current_session
_QUIT_COMMANDS = frozenset({"exit", "quit", "q"})
_HISTORY_FILE = PBI_CLI_HOME / "repl_history"
@ -54,9 +53,9 @@ class PbiRepl:
def _get_prompt(self) -> str:
"""Dynamic prompt showing active connection name."""
store = load_connections()
if store.last_used:
return f"pbi({store.last_used})> "
session = get_current_session()
if session is not None:
return f"pbi({session.connection_name})> "
return "pbi> "
def _execute_line(self, line: str) -> None:
@ -119,14 +118,6 @@ class PbiRepl:
print_info("pbi-cli interactive mode. Type 'exit' or Ctrl+D to quit.")
# Pre-warm the shared MCP server
try:
client = get_shared_client()
client.start()
except Exception as e:
print_warning(f"Could not pre-warm MCP server: {e}")
print_info("Commands will start the server on first use.")
try:
while True:
try:
@ -139,11 +130,9 @@ class PbiRepl:
except EOFError:
pass
finally:
# Shut down the shared MCP server
try:
get_shared_client().stop()
except Exception:
pass
from pbi_cli.core.session import disconnect
disconnect()
print_info("Goodbye.")

View file

@ -1,7 +1,8 @@
"""Shared test fixtures for pbi-cli."""
"""Shared test fixtures for pbi-cli v2 (direct .NET backend)."""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from typing import Any
@ -9,96 +10,241 @@ import pytest
from click.testing import CliRunner
# ---------------------------------------------------------------------------
# Canned MCP responses used by the mock client
# ---------------------------------------------------------------------------
CANNED_RESPONSES: dict[str, dict[str, Any]] = {
"connection_operations": {
"Connect": {"status": "connected", "connectionName": "test-conn"},
"ConnectFabric": {"status": "connected", "connectionName": "ws/model"},
"Disconnect": {"status": "disconnected"},
},
"dax_query_operations": {
"Execute": {"columns": ["Amount"], "rows": [{"Amount": 42}]},
"Validate": {"isValid": True},
"ClearCache": {"status": "cleared"},
},
"measure_operations": {
"List": [
{"name": "Total Sales", "expression": "SUM(Sales[Amount])", "tableName": "Sales"},
],
"Get": {"name": "Total Sales", "expression": "SUM(Sales[Amount])"},
"Create": {"status": "created"},
"Update": {"status": "updated"},
"Delete": {"status": "deleted"},
"Rename": {"status": "renamed"},
"Move": {"status": "moved"},
"ExportTMDL": "measure 'Total Sales'\n expression = SUM(Sales[Amount])",
},
"table_operations": {
"List": [{"name": "Sales", "mode": "Import"}],
"Get": {"name": "Sales", "mode": "Import", "columns": []},
"Create": {"status": "created"},
"Delete": {"status": "deleted"},
"Refresh": {"status": "refreshed"},
"GetSchema": {"name": "Sales", "columns": [{"name": "Amount", "type": "double"}]},
"ExportTMDL": "table Sales\n mode: Import",
"Rename": {"status": "renamed"},
"MarkAsDateTable": {"status": "marked"},
},
"model_operations": {
"Get": {"name": "My Model", "compatibilityLevel": 1600},
"GetStats": {"tables": 5, "measures": 10, "columns": 30},
"Refresh": {"status": "refreshed"},
"Rename": {"status": "renamed"},
"ExportTMDL": "model Model\n culture: en-US",
},
"column_operations": {
"List": [{"name": "Amount", "tableName": "Sales", "dataType": "double"}],
"Get": {"name": "Amount", "dataType": "double"},
"Create": {"status": "created"},
"Update": {"status": "updated"},
"Delete": {"status": "deleted"},
"Rename": {"status": "renamed"},
"ExportTMDL": "column Amount\n dataType: double",
},
}
# ---------------------------------------------------------------------------
# Mock MCP client
# Mock TOM objects used by the mock session
# ---------------------------------------------------------------------------
class MockPbiMcpClient:
"""Fake MCP client returning canned responses without spawning a process."""
class MockCollection:
"""Simulates a .NET ICollection (iterable, with Count/Add/Remove)."""
def __init__(self, responses: dict[str, dict[str, Any]] | None = None) -> None:
self.responses = responses or CANNED_RESPONSES
self.started = False
self.stopped = False
self.calls: list[tuple[str, dict[str, Any]]] = []
def __init__(self, items: list[Any] | None = None) -> None:
self._items = list(items or [])
def start(self) -> None:
self.started = True
def __iter__(self) -> Any:
return iter(self._items)
def stop(self) -> None:
self.stopped = True
def __getitem__(self, index: int) -> Any:
return self._items[index]
def call_tool(self, tool_name: str, request: dict[str, Any]) -> Any:
self.calls.append((tool_name, request))
operation = request.get("operation", "")
tool_responses = self.responses.get(tool_name, {})
if operation in tool_responses:
return tool_responses[operation]
return {"status": "ok"}
@property
def Count(self) -> int:
return len(self._items)
def list_tools(self) -> list[dict[str, Any]]:
return [
{"name": "measure_operations", "description": "Measure CRUD"},
{"name": "table_operations", "description": "Table CRUD"},
{"name": "dax_query_operations", "description": "DAX queries"},
]
def Add(self, item: Any = None) -> Any:
if item is not None:
self._items.append(item)
return item
# Parameterless Add() -- create a simple object and return it
obj = type("TraceObj", (), {
"Name": "", "AutoRestart": False, "ID": "trace-1",
"Update": lambda self: None,
"Start": lambda self: None,
"Stop": lambda self: None,
})()
self._items.append(obj)
return obj
def Remove(self, item: Any) -> None:
self._items.remove(item)
@dataclass
class MockMeasure:
Name: str = "Total Sales"
Expression: str = "SUM(Sales[Amount])"
DisplayFolder: str = ""
Description: str = ""
FormatString: str = ""
IsHidden: bool = False
@dataclass
class MockColumn:
Name: str = "Amount"
DataType: str = "Double"
Type: str = "DataColumn"
SourceColumn: str = "Amount"
DisplayFolder: str = ""
Description: str = ""
FormatString: str = ""
IsHidden: bool = False
IsKey: bool = False
@dataclass
class MockPartition:
Name: str = "Partition1"
Mode: str = "Import"
SourceType: str = "M"
State: str = "Ready"
@dataclass
class MockRelationship:
Name: str = "rel1"
FromTable: Any = None
FromColumn: Any = None
ToTable: Any = None
ToColumn: Any = None
CrossFilteringBehavior: str = "OneDirection"
IsActive: bool = True
@dataclass
class MockHierarchy:
Name: str = "DateHierarchy"
Description: str = ""
Levels: Any = None
def __post_init__(self) -> None:
if self.Levels is None:
self.Levels = MockCollection()
@dataclass
class MockLevel:
Name: str = "Year"
Ordinal: int = 0
Column: Any = None
@dataclass
class MockRole:
Name: str = "Reader"
Description: str = ""
ModelPermission: str = "Read"
TablePermissions: Any = None
def __post_init__(self) -> None:
if self.TablePermissions is None:
self.TablePermissions = MockCollection()
@dataclass
class MockPerspective:
Name: str = "Sales View"
Description: str = ""
@dataclass
class MockExpression:
Name: str = "ServerURL"
Kind: str = "M"
Expression: str = '"https://example.com"'
Description: str = ""
@dataclass
class MockCulture:
Name: str = "en-US"
class MockTable:
"""Simulates a TOM Table with nested collections."""
def __init__(
self,
name: str = "Sales",
data_category: str = "",
description: str = "",
) -> None:
self.Name = name
self.DataCategory = data_category
self.Description = description
self.IsHidden = False
self.CalculationGroup = None
self.Measures = MockCollection([MockMeasure()])
self.Columns = MockCollection([MockColumn()])
self.Partitions = MockCollection([MockPartition()])
self.Hierarchies = MockCollection()
class MockModel:
"""Simulates a TOM Model."""
def __init__(self) -> None:
self.Name = "TestModel"
self.Description = ""
self.DefaultMode = "Import"
self.Culture = "en-US"
self.CompatibilityLevel = 1600
self.Tables = MockCollection([MockTable()])
self.Relationships = MockCollection()
self.Roles = MockCollection()
self.Perspectives = MockCollection()
self.Expressions = MockCollection()
self.Cultures = MockCollection()
def SaveChanges(self) -> None:
pass
def RequestRefresh(self, refresh_type: Any) -> None:
pass
class MockDatabase:
"""Simulates a TOM Database."""
def __init__(self, model: MockModel | None = None) -> None:
self.Name = "TestDB"
self.ID = "TestDB-ID"
self.CompatibilityLevel = 1600
self.LastUpdate = "2026-01-01"
self.Model = model or MockModel()
class MockServer:
"""Simulates a TOM Server."""
def __init__(self, database: MockDatabase | None = None) -> None:
db = database or MockDatabase()
self.Databases = MockCollection([db])
self.Traces = MockCollection()
def Connect(self, conn_str: str) -> None:
pass
def Disconnect(self) -> None:
pass
def BeginTransaction(self) -> str:
return "tx-001"
def CommitTransaction(self, tx_id: str = "") -> None:
pass
def RollbackTransaction(self, tx_id: str = "") -> None:
pass
class MockAdomdConnection:
"""Simulates an AdomdConnection."""
def Open(self) -> None:
pass
def Close(self) -> None:
pass
def build_mock_session() -> Any:
"""Build a complete mock Session for testing."""
from pbi_cli.core.session import Session
model = MockModel()
database = MockDatabase(model)
server = MockServer(database)
adomd = MockAdomdConnection()
return Session(
server=server,
database=database,
model=model,
adomd_connection=adomd,
connection_name="test-conn",
data_source="localhost:12345",
)
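The mocks above mirror the .NET collection surface as seen through pythonnet: `Count`, `Add`, and `Remove` rather than Python's `len`/`append`. A self-contained sketch of that pattern, simplified from `MockCollection`:

```python
class TinyCollection:
    """Bare-bones stand-in for a .NET ICollection as exposed by pythonnet."""

    def __init__(self, items=None):
        self._items = list(items or [])

    def __iter__(self):
        return iter(self._items)

    @property
    def Count(self):  # .NET-style property, used instead of len()
        return len(self._items)

    def Add(self, item):
        self._items.append(item)
        return item

    def Remove(self, item):
        self._items.remove(item)

tables = TinyCollection(["Sales"])
tables.Add("Date")
print(tables.Count)  # 2
```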
# ---------------------------------------------------------------------------
@ -107,28 +253,31 @@ class MockPbiMcpClient:
@pytest.fixture
def mock_client() -> MockPbiMcpClient:
"""A fresh mock MCP client."""
return MockPbiMcpClient()
def mock_session() -> Any:
"""A fresh mock session."""
return build_mock_session()
@pytest.fixture
def patch_get_client(
monkeypatch: pytest.MonkeyPatch, mock_client: MockPbiMcpClient
) -> MockPbiMcpClient:
"""Monkeypatch get_client in _helpers and connection modules."""
factory = lambda repl_mode=False: mock_client # noqa: E731
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", factory)
monkeypatch.setattr("pbi_cli.commands.connection.get_client", factory)
# Also patch dax.py which calls get_client directly
monkeypatch.setattr("pbi_cli.commands.dax.get_client", factory)
# Skip auto-setup (binary download + skills install) in tests
def patch_session(monkeypatch: pytest.MonkeyPatch, mock_session: Any) -> Any:
"""Monkeypatch get_session_for_command to return mock session."""
monkeypatch.setattr(
"pbi_cli.core.session.get_session_for_command",
lambda ctx: mock_session,
)
# Also patch modules that import get_session_for_command at module level
monkeypatch.setattr(
"pbi_cli.commands.model.get_session_for_command",
lambda ctx: mock_session,
)
# Also patch connection commands that call session.connect directly
monkeypatch.setattr(
"pbi_cli.core.session.connect",
lambda data_source, catalog="": mock_session,
)
# Skip skill install in connect
monkeypatch.setattr("pbi_cli.commands.connection._ensure_ready", lambda: None)
return mock_client
return mock_session
@pytest.fixture

View file

@ -1,98 +0,0 @@
"""Tests for pbi_cli.core.binary_manager."""
from __future__ import annotations
import os
from pathlib import Path
from unittest.mock import patch
import pytest
from pbi_cli.core.binary_manager import (
_binary_source,
_find_managed_binary,
get_binary_info,
resolve_binary,
)
def test_resolve_binary_env_var(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> None:
fake_bin = tmp_path / "powerbi-modeling-mcp.exe"
fake_bin.write_text("fake", encoding="utf-8")
monkeypatch.setenv("PBI_MCP_BINARY", str(fake_bin))
result = resolve_binary()
assert result == fake_bin
def test_resolve_binary_env_var_missing_file(monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setenv("PBI_MCP_BINARY", "/nonexistent/path")
with pytest.raises(FileNotFoundError, match="non-existent"):
resolve_binary()
def test_resolve_binary_not_found(tmp_config: Path, monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.delenv("PBI_MCP_BINARY", raising=False)
with (
patch("pbi_cli.core.binary_manager.find_vscode_extension_binary", return_value=None),
patch("pbi_cli.core.binary_manager.PBI_CLI_HOME", tmp_config),
):
with pytest.raises(FileNotFoundError, match="not found"):
resolve_binary()
def test_find_managed_binary(tmp_config: Path) -> None:
bin_dir = tmp_config / "bin" / "0.4.0"
bin_dir.mkdir(parents=True)
fake_bin = bin_dir / "powerbi-modeling-mcp.exe"
fake_bin.write_text("fake", encoding="utf-8")
with (
patch("pbi_cli.core.binary_manager.PBI_CLI_HOME", tmp_config),
patch("pbi_cli.core.binary_manager.binary_name", return_value="powerbi-modeling-mcp.exe"),
):
result = _find_managed_binary()
assert result is not None
assert result.name == "powerbi-modeling-mcp.exe"
def test_find_managed_binary_empty_dir(tmp_config: Path) -> None:
bin_dir = tmp_config / "bin"
bin_dir.mkdir(parents=True)
with patch("pbi_cli.core.binary_manager.PBI_CLI_HOME", tmp_config):
result = _find_managed_binary()
assert result is None
def test_binary_source_env_var(monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setenv("PBI_MCP_BINARY", "/some/path")
result = _binary_source(Path("/some/path"))
assert "environment variable" in result
def test_binary_source_managed() -> None:
with patch.dict(os.environ, {}, clear=False):
if "PBI_MCP_BINARY" in os.environ:
del os.environ["PBI_MCP_BINARY"]
result = _binary_source(Path("/home/user/.pbi-cli/bin/0.4.0/binary"))
assert "managed" in result
def test_binary_source_vscode() -> None:
with patch.dict(os.environ, {}, clear=False):
if "PBI_MCP_BINARY" in os.environ:
del os.environ["PBI_MCP_BINARY"]
result = _binary_source(Path("/home/user/.vscode/extensions/ext/server/binary"))
assert "VS Code" in result
def test_get_binary_info_not_found(tmp_config: Path, monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.delenv("PBI_MCP_BINARY", raising=False)
with (
patch("pbi_cli.core.binary_manager.find_vscode_extension_binary", return_value=None),
patch("pbi_cli.core.binary_manager.PBI_CLI_HOME", tmp_config),
):
info = get_binary_info()
assert info["binary_path"] == "not found"
assert info["version"] == "none"

View file

@ -3,27 +3,25 @@
from __future__ import annotations
from pathlib import Path
from typing import Any
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_connect_success(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["connect", "-d", "localhost:54321"])
assert result.exit_code == 0
assert len(patch_get_client.calls) == 1
assert patch_get_client.calls[0][0] == "connection_operations"
def test_connect_json_output(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "connect", "-d", "localhost:54321"])
@ -31,19 +29,9 @@ def test_connect_json_output(
assert "connected" in result.output
def test_connect_fabric(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["connect-fabric", "-w", "My Workspace", "-m", "My Model"])
assert result.exit_code == 0
assert patch_get_client.calls[0][1]["operation"] == "ConnectFabric"
def test_disconnect(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
# First connect, then disconnect
@ -54,7 +42,6 @@ def test_disconnect(
def test_disconnect_no_active_connection(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["disconnect"])
@ -71,7 +58,7 @@ def test_connections_list_empty(
def test_connections_list_json(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
cli_runner.invoke(cli, ["connect", "-d", "localhost:54321"])

View file

@ -3,39 +3,56 @@
from __future__ import annotations
from pathlib import Path
from typing import Any
from unittest.mock import patch
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def _mock_execute_dax(**kwargs: Any) -> dict:
return {"columns": ["Amount"], "rows": [{"Amount": 42}]}
def _mock_validate_dax(**kwargs: Any) -> dict:
return {"valid": True, "query": kwargs.get("query", "")}
def _mock_clear_cache(**kwargs: Any) -> dict:
return {"status": "cache_cleared"}
def test_dax_execute(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "dax", "execute", "EVALUATE Sales"])
with patch("pbi_cli.core.adomd_backend.execute_dax", side_effect=_mock_execute_dax):
result = cli_runner.invoke(cli, ["--json", "dax", "execute", "EVALUATE Sales"])
assert result.exit_code == 0
assert "42" in result.output
def test_dax_execute_from_file(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
tmp_path: Path,
) -> None:
query_file = tmp_path / "query.dax"
query_file.write_text("EVALUATE Sales", encoding="utf-8")
result = cli_runner.invoke(cli, ["--json", "dax", "execute", "--file", str(query_file)])
with patch("pbi_cli.core.adomd_backend.execute_dax", side_effect=_mock_execute_dax):
result = cli_runner.invoke(
cli, ["--json", "dax", "execute", "--file", str(query_file)]
)
assert result.exit_code == 0
def test_dax_execute_no_query(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["dax", "execute"])
@ -44,18 +61,20 @@ def test_dax_execute_no_query(
def test_dax_validate(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "dax", "validate", "EVALUATE Sales"])
with patch("pbi_cli.core.adomd_backend.validate_dax", side_effect=_mock_validate_dax):
result = cli_runner.invoke(cli, ["--json", "dax", "validate", "EVALUATE Sales"])
assert result.exit_code == 0
assert "isValid" in result.output
assert "valid" in result.output
def test_dax_clear_cache(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "dax", "clear-cache"])
with patch("pbi_cli.core.adomd_backend.clear_cache", side_effect=_mock_clear_cache):
result = cli_runner.invoke(cli, ["--json", "dax", "clear-cache"])
assert result.exit_code == 0

View file

@ -3,16 +3,16 @@
from __future__ import annotations
from pathlib import Path
from typing import Any
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_measure_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "measure", "list"])
@ -22,7 +22,7 @@ def test_measure_list(
def test_measure_list_with_table_filter(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "measure", "list", "--table", "Sales"])
@ -31,111 +31,47 @@ def test_measure_list_with_table_filter(
def test_measure_get(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "measure", "get", "Total Sales", "--table", "Sales"])
result = cli_runner.invoke(
cli, ["--json", "measure", "get", "Total Sales", "--table", "Sales"]
)
assert result.exit_code == 0
assert "Total Sales" in result.output
def test_measure_create(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli,
[
"--json",
"measure",
"create",
"Revenue",
"-e",
"SUM(Sales[Revenue])",
"-t",
"Sales",
],
)
assert result.exit_code == 0
def test_measure_update(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli,
[
"--json",
"measure",
"update",
"Revenue",
"-t",
"Sales",
"-e",
"SUM(Sales[Amount])",
],
["--json", "measure", "create", "Revenue", "-e", "SUM(Sales[Revenue])", "-t", "Sales"],
)
assert result.exit_code == 0
assert "created" in result.output
def test_measure_delete(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli,
[
"--json",
"measure",
"delete",
"Revenue",
"-t",
"Sales",
],
cli, ["--json", "measure", "delete", "Total Sales", "-t", "Sales"]
)
assert result.exit_code == 0
assert "deleted" in result.output
def test_measure_rename(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli,
[
"--json",
"measure",
"rename",
"OldName",
"NewName",
"-t",
"Sales",
],
)
assert result.exit_code == 0
def test_measure_move(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli,
[
"--json",
"measure",
"move",
"Revenue",
"-t",
"Sales",
"--to-table",
"Finance",
],
cli, ["--json", "measure", "rename", "Total Sales", "Revenue", "-t", "Sales"]
)
assert result.exit_code == 0

View file

@ -3,16 +3,16 @@
from __future__ import annotations
from pathlib import Path
from typing import Any
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_column_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "column", "list", "--table", "Sales"])
@ -21,7 +21,7 @@ def test_column_list(
def test_relationship_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "relationship", "list"])
@ -30,7 +30,7 @@ def test_relationship_list(
def test_database_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "database", "list"])
@ -39,7 +39,7 @@ def test_database_list(
def test_security_role_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "security-role", "list"])
@ -48,7 +48,7 @@ def test_security_role_list(
def test_calc_group_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "calc-group", "list"])
@ -57,7 +57,7 @@ def test_calc_group_list(
def test_partition_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "partition", "list", "--table", "Sales"])
@ -66,7 +66,7 @@ def test_partition_list(
def test_perspective_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "perspective", "list"])
@ -75,16 +75,16 @@ def test_perspective_list(
def test_hierarchy_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "hierarchy", "list", "--table", "Date"])
result = cli_runner.invoke(cli, ["--json", "hierarchy", "list"])
assert result.exit_code == 0
def test_expression_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "expression", "list"])
@ -93,7 +93,7 @@ def test_expression_list(
def test_calendar_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "calendar", "list"])
@ -102,7 +102,7 @@ def test_calendar_list(
def test_trace_start(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "trace", "start"])
@ -111,7 +111,7 @@ def test_trace_start(
def test_transaction_begin(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "transaction", "begin"])
@ -120,7 +120,7 @@ def test_transaction_begin(
def test_transaction_commit(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "transaction", "commit"])
@ -129,7 +129,7 @@ def test_transaction_commit(
def test_transaction_rollback(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "transaction", "rollback"])
@ -138,7 +138,7 @@ def test_transaction_rollback(
def test_advanced_culture_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "advanced", "culture", "list"])

View file

@ -3,36 +3,27 @@
from __future__ import annotations
from pathlib import Path
from typing import Any
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_model_get(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "model", "get"])
assert result.exit_code == 0
assert "My Model" in result.output
assert "TestModel" in result.output
def test_model_stats(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "model", "stats"])
assert result.exit_code == 0
def test_model_refresh(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "model", "refresh"])
assert result.exit_code == 0

View file

@ -2,8 +2,6 @@
from __future__ import annotations
from pathlib import Path
import pytest
from click.testing import CliRunner
@ -27,29 +25,24 @@ def test_repl_build_completer() -> None:
assert "repl" in completer.words
def test_repl_get_prompt_no_connection(tmp_connections: Path) -> None:
def test_repl_get_prompt_no_connection() -> None:
repl = PbiRepl()
prompt = repl._get_prompt()
assert prompt == "pbi> "
def test_repl_get_prompt_with_connection(tmp_connections: Path) -> None:
from pbi_cli.core.connection_store import (
ConnectionInfo,
ConnectionStore,
add_connection,
save_connections,
)
def test_repl_get_prompt_with_session(monkeypatch: pytest.MonkeyPatch) -> None:
from tests.conftest import build_mock_session
store = add_connection(
ConnectionStore(),
ConnectionInfo(name="test-conn", data_source="localhost"),
)
save_connections(store)
session = build_mock_session()
monkeypatch.setattr("pbi_cli.core.session._current_session", session)
repl = PbiRepl()
prompt = repl._get_prompt()
assert prompt == "pbi(test-conn)> "
assert "test-conn" in prompt
# Clean up
monkeypatch.setattr("pbi_cli.core.session._current_session", None)
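The rewritten prompt tests only check the shape of the string: bare `pbi> ` with no session, and the connection name embedded when one is active. A minimal sketch of prompt logic consistent with those assertions — the helper name and the `display_name` attribute are assumptions here, not the real `PbiRepl` API:

```python
from __future__ import annotations


def build_prompt(session: object | None) -> str:
    """Return 'pbi> ' with no active session, 'pbi(<name>)> ' otherwise."""
    if session is None:
        return "pbi> "
    # `display_name` is a hypothetical attribute standing in for whatever
    # label the real session object exposes for its connection.
    name = getattr(session, "display_name", "connected")
    return f"pbi({name})> "
```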
def test_repl_execute_line_empty() -> None:
@ -61,35 +54,16 @@ def test_repl_execute_line_empty() -> None:
def test_repl_execute_line_exit() -> None:
repl = PbiRepl()
import pytest
with pytest.raises(EOFError):
repl._execute_line("exit")
def test_repl_execute_line_quit() -> None:
repl = PbiRepl()
import pytest
with pytest.raises(EOFError):
repl._execute_line("quit")
def test_repl_execute_line_strips_pbi_prefix(
monkeypatch: pytest.MonkeyPatch,
tmp_connections: Path,
) -> None:
from tests.conftest import MockPbiMcpClient
mock = MockPbiMcpClient()
factory = lambda repl_mode=False: mock # noqa: E731
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", factory)
repl = PbiRepl(json_output=True)
# "pbi measure list" should work like "measure list"
repl._execute_line("pbi --json measure list")
def test_repl_execute_line_help() -> None:
repl = PbiRepl()
# --help should not crash the REPL (Click raises SystemExit)

View file

@ -11,32 +11,33 @@ from pbi_cli.main import cli
def test_setup_info(cli_runner: CliRunner, tmp_config: Path) -> None:
fake_info = {
"binary_path": "/test/binary",
"version": "0.4.0",
"platform": "win32-x64",
"source": "managed",
}
with patch("pbi_cli.commands.setup_cmd.get_binary_info", return_value=fake_info):
with patch("pbi_cli.core.dotnet_loader._dll_dir", return_value=tmp_config / "dlls"):
result = cli_runner.invoke(cli, ["--json", "setup", "--info"])
assert result.exit_code == 0
assert "0.4.0" in result.output
assert result.exit_code == 0
assert "version" in result.output
def test_setup_check(cli_runner: CliRunner, tmp_config: Path) -> None:
with patch(
"pbi_cli.commands.setup_cmd.check_for_updates",
return_value=("0.3.0", "0.4.0", True),
def test_setup_verify_missing_pythonnet(cli_runner: CliRunner, tmp_config: Path) -> None:
with (
patch("pbi_cli.core.dotnet_loader._dll_dir", return_value=tmp_config / "dlls"),
patch.dict("sys.modules", {"pythonnet": None}),
):
result = cli_runner.invoke(cli, ["--json", "setup", "--check"])
assert result.exit_code == 0
assert "0.4.0" in result.output
result = cli_runner.invoke(cli, ["setup"])
# Should fail because pythonnet is "missing" and dlls dir doesn't exist
assert result.exit_code != 0
def test_setup_check_up_to_date(cli_runner: CliRunner, tmp_config: Path) -> None:
with patch(
"pbi_cli.commands.setup_cmd.check_for_updates",
return_value=("0.4.0", "0.4.0", False),
def test_setup_verify_success(cli_runner: CliRunner, tmp_config: Path) -> None:
# Create fake DLL directory with required files
dll_dir = tmp_config / "dlls"
dll_dir.mkdir()
(dll_dir / "Microsoft.AnalysisServices.Tabular.dll").write_text("fake")
(dll_dir / "Microsoft.AnalysisServices.AdomdClient.dll").write_text("fake")
with (
patch("pbi_cli.core.dotnet_loader._dll_dir", return_value=dll_dir),
patch("pbi_cli.commands.connection._ensure_ready", lambda: None),
):
result = cli_runner.invoke(cli, ["setup", "--check"])
assert result.exit_code == 0
result = cli_runner.invoke(cli, ["--json", "setup"])
assert result.exit_code == 0
assert "ready" in result.output
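The verify tests above create exactly two fake assemblies before expecting `setup` to succeed, which suggests a presence check along these lines. The helper name `dlls_ready` is an assumption — the real check lives somewhere in `pbi_cli.core.dotnet_loader`:

```python
from pathlib import Path

# The two assemblies the setup tests create before expecting success.
REQUIRED_DLLS = (
    "Microsoft.AnalysisServices.Tabular.dll",
    "Microsoft.AnalysisServices.AdomdClient.dll",
)


def dlls_ready(dll_dir: Path) -> bool:
    """True when every required Analysis Services assembly is present."""
    return all((dll_dir / name).is_file() for name in REQUIRED_DLLS)
```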

View file

@ -3,16 +3,16 @@
from __future__ import annotations
from pathlib import Path
from typing import Any
from click.testing import CliRunner
from pbi_cli.main import cli
from tests.conftest import MockPbiMcpClient
def test_table_list(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "list"])
@ -22,45 +22,18 @@ def test_table_list(
def test_table_get(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "get", "Sales"])
assert result.exit_code == 0
def test_table_create(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(
cli,
[
"--json",
"table",
"create",
"NewTable",
"--mode",
"Import",
],
)
assert result.exit_code == 0
def test_table_delete(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
patch_session: Any,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "delete", "OldTable"])
assert result.exit_code == 0
def test_table_refresh(
cli_runner: CliRunner,
patch_get_client: MockPbiMcpClient,
tmp_connections: Path,
) -> None:
result = cli_runner.invoke(cli, ["--json", "table", "refresh", "Sales"])
result = cli_runner.invoke(cli, ["--json", "table", "delete", "Sales"])
assert result.exit_code == 0
assert "deleted" in result.output

View file

@ -9,40 +9,28 @@ from pbi_cli.core.config import PbiConfig, load_config, save_config
def test_default_config() -> None:
config = PbiConfig()
assert config.binary_version == ""
assert config.binary_path == ""
assert config.default_connection == ""
assert config.binary_args == ["--start", "--skipconfirmation"]
def test_with_updates_returns_new_instance() -> None:
original = PbiConfig(binary_version="1.0")
updated = original.with_updates(binary_version="2.0")
original = PbiConfig(default_connection="conn1")
updated = original.with_updates(default_connection="conn2")
assert updated.binary_version == "2.0"
assert original.binary_version == "1.0" # unchanged
def test_with_updates_preserves_other_fields() -> None:
original = PbiConfig(binary_version="1.0", binary_path="/bin/test")
updated = original.with_updates(binary_version="2.0")
assert updated.binary_path == "/bin/test"
assert updated.default_connection == "conn2"
assert original.default_connection == "conn1" # unchanged
def test_load_config_missing_file(tmp_config: Path) -> None:
config = load_config()
assert config.binary_version == ""
assert config.binary_args == ["--start", "--skipconfirmation"]
assert config.default_connection == ""
def test_save_and_load_roundtrip(tmp_config: Path) -> None:
original = PbiConfig(binary_version="0.4.0", binary_path="/test/path")
original = PbiConfig(default_connection="my-conn")
save_config(original)
loaded = load_config()
assert loaded.binary_version == "0.4.0"
assert loaded.binary_path == "/test/path"
assert loaded.default_connection == "my-conn"
def test_load_config_corrupt_json(tmp_config: Path) -> None:
@ -50,13 +38,13 @@ def test_load_config_corrupt_json(tmp_config: Path) -> None:
config_file.write_text("not valid json{{{", encoding="utf-8")
config = load_config()
assert config.binary_version == "" # falls back to defaults
assert config.default_connection == "" # falls back to defaults
def test_config_is_frozen() -> None:
config = PbiConfig()
try:
config.binary_version = "new" # type: ignore[misc]
config.default_connection = "new" # type: ignore[misc]
assert False, "Should have raised"
except AttributeError:
pass
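Taken together, the updated config tests pin down a small contract: defaults are empty strings, `with_updates` returns a modified copy while leaving the original untouched, and direct mutation raises `AttributeError`. A minimal sketch consistent with those assertions (the real `PbiConfig` likely carries more fields):

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class PbiConfig:
    """Immutable CLI settings; mutation raises FrozenInstanceError,
    which is an AttributeError subclass, so the tests' except clause fires."""

    default_connection: str = ""

    def with_updates(self, **changes: str) -> "PbiConfig":
        # Copy-on-update: the original instance is never modified.
        return replace(self, **changes)
```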

View file

@ -1,6 +1,6 @@
"""End-to-end tests requiring the real Power BI MCP binary.
"""End-to-end tests requiring a running Power BI Desktop instance.
These tests are skipped in CI unless a binary is available.
These tests are skipped in CI unless PBI Desktop is available.
Run with: pytest -m e2e
"""
@ -24,14 +24,6 @@ def _pbi(*args: str) -> subprocess.CompletedProcess[str]:
)
@pytest.fixture(autouse=True)
def _skip_if_no_binary() -> None:
"""Skip all e2e tests if the binary is not available."""
result = _pbi("--json", "setup", "--info")
if "not found" in result.stdout:
pytest.skip("Power BI MCP binary not available")
def test_version() -> None:
result = _pbi("--version")
assert result.returncode == 0
@ -47,3 +39,4 @@ def test_help() -> None:
def test_setup_info() -> None:
result = _pbi("--json", "setup", "--info")
assert result.returncode == 0
assert "version" in result.stdout

View file

@ -5,10 +5,10 @@ from __future__ import annotations
import click
from pbi_cli.core.errors import (
BinaryNotFoundError,
ConnectionRequiredError,
McpToolError,
DotNetNotFoundError,
PbiCliError,
TomError,
)
@ -18,9 +18,9 @@ def test_pbi_cli_error_is_click_exception() -> None:
assert err.format_message() == "test message"
def test_binary_not_found_default_message() -> None:
err = BinaryNotFoundError()
assert "pbi connect" in err.format_message()
def test_dotnet_not_found_default_message() -> None:
err = DotNetNotFoundError()
assert "pythonnet" in err.format_message()
def test_connection_required_default_message() -> None:
@ -28,9 +28,9 @@ def test_connection_required_default_message() -> None:
assert "pbi connect" in err.format_message()
def test_mcp_tool_error_includes_tool_name() -> None:
err = McpToolError("measure_operations", "not found")
assert "measure_operations" in err.format_message()
def test_tom_error_includes_operation() -> None:
err = TomError("measure_list", "not found")
assert "measure_list" in err.format_message()
assert "not found" in err.format_message()
assert err.tool_name == "measure_operations"
assert err.operation == "measure_list"
assert err.detail == "not found"
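The renamed error tests imply a hierarchy like the sketch below. In the real code `PbiCliError` subclasses a Click exception (it asserts `format_message()`); plain `Exception` is used here to keep the sketch dependency-free:

```python
class PbiCliError(Exception):
    """Base CLI error exposing a click-style format_message()."""

    def __init__(self, message: str) -> None:
        super().__init__(message)
        self.message = message

    def format_message(self) -> str:
        return self.message


class TomError(PbiCliError):
    """Raised when a TOM operation fails against the model."""

    def __init__(self, operation: str, detail: str) -> None:
        # Both the operation name and the detail must appear in the
        # formatted message, per the assertions above.
        super().__init__(f"{operation}: {detail}")
        self.operation = operation
        self.detail = detail
```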

View file

@ -4,10 +4,9 @@ from __future__ import annotations
import pytest
from pbi_cli.commands._helpers import build_definition, run_tool
from pbi_cli.core.errors import McpToolError
from pbi_cli.commands._helpers import build_definition, run_command
from pbi_cli.core.errors import TomError
from pbi_cli.main import PbiContext
from tests.conftest import MockPbiMcpClient
def test_build_definition_required_only() -> None:
@ -37,89 +36,37 @@ def test_build_definition_preserves_falsy_non_none() -> None:
assert result["label"] == ""
def test_run_tool_adds_connection(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
# Mock connection store with the named connection
from pbi_cli.core.connection_store import ConnectionInfo, ConnectionStore
store = ConnectionStore(
last_used="my-conn",
connections={"my-conn": ConnectionInfo(name="my-conn", data_source="localhost:12345")},
)
monkeypatch.setattr(
"pbi_cli.core.connection_store.load_connections",
lambda: store,
)
ctx = PbiContext(json_output=True, connection="my-conn")
run_tool(ctx, "measure_operations", {"operation": "List"})
# First call is auto-reconnect (Connect), second is the actual tool call.
# The connectionName comes from the server response ("test-conn"), not our saved name.
assert mock.calls[1][1]["connectionName"] == "test-conn"
def test_run_tool_no_connection(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
# Ensure no last-used connection is found
from pbi_cli.core.connection_store import ConnectionStore
monkeypatch.setattr(
"pbi_cli.core.connection_store.load_connections",
lambda: ConnectionStore(),
)
def test_run_command_formats_result() -> None:
ctx = PbiContext(json_output=True)
run_tool(ctx, "measure_operations", {"operation": "List"})
assert "connectionName" not in mock.calls[0][1]
result = run_command(ctx, lambda: {"status": "ok"})
assert result == {"status": "ok"}
def test_run_tool_stops_client_in_oneshot(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
from pbi_cli.core.connection_store import ConnectionStore
monkeypatch.setattr(
"pbi_cli.core.connection_store.load_connections",
lambda: ConnectionStore(),
)
def test_run_command_exits_on_error_oneshot() -> None:
ctx = PbiContext(json_output=True, repl_mode=False)
run_tool(ctx, "measure_operations", {"operation": "List"})
assert mock.stopped is True
def failing_fn() -> None:
raise RuntimeError("boom")
with pytest.raises(SystemExit):
run_command(ctx, failing_fn)
def test_run_tool_keeps_client_in_repl(monkeypatch: pytest.MonkeyPatch) -> None:
mock = MockPbiMcpClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
def test_run_command_raises_tom_error_in_repl() -> None:
ctx = PbiContext(json_output=True, repl_mode=True)
run_tool(ctx, "measure_operations", {"operation": "List"})
assert mock.stopped is False
def failing_fn() -> None:
raise RuntimeError("boom")
with pytest.raises(TomError):
run_command(ctx, failing_fn)
def test_run_tool_raises_mcp_tool_error_on_failure(
monkeypatch: pytest.MonkeyPatch,
) -> None:
class FailingClient(MockPbiMcpClient):
def call_tool(self, tool_name: str, request: dict) -> None:
raise RuntimeError("server crashed")
mock = FailingClient()
monkeypatch.setattr("pbi_cli.commands._helpers.get_client", lambda repl_mode=False: mock)
from pbi_cli.core.connection_store import ConnectionStore
monkeypatch.setattr(
"pbi_cli.core.connection_store.load_connections",
lambda: ConnectionStore(),
)
def test_run_command_passes_kwargs() -> None:
ctx = PbiContext(json_output=True)
with pytest.raises(McpToolError):
run_tool(ctx, "measure_operations", {"operation": "List"})
def fn_with_args(name: str, count: int) -> dict:
return {"name": name, "count": count}
result = run_command(ctx, fn_with_args, name="test", count=42)
assert result == {"name": "test", "count": 42}
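The four `run_command` tests together describe its contract: forward kwargs, return the function's result, exit the process on failure in one-shot mode, and surface a catchable `TomError` in REPL mode. A minimal sketch under those assumptions (the error type is a stand-in for `pbi_cli.core.errors.TomError`):

```python
from typing import Any, Callable


class TomError(Exception):
    """Stand-in for pbi_cli.core.errors.TomError."""


def run_command(ctx: Any, fn: Callable[..., Any], **kwargs: Any) -> Any:
    """Run fn(**kwargs), converting failures per the context's mode."""
    try:
        return fn(**kwargs)
    except Exception as exc:
        if getattr(ctx, "repl_mode", False):
            # The REPL keeps running, so raise an error it can catch.
            raise TomError(str(exc)) from exc
        # One-shot CLI invocation: exit with a non-zero status.
        raise SystemExit(1) from exc
```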

View file

@ -1,83 +0,0 @@
"""Tests for pbi_cli.core.mcp_client (unit-level, no real server)."""
from __future__ import annotations
from pbi_cli.core.mcp_client import (
_extract_text,
_parse_content,
get_client,
get_shared_client,
)
# ---------------------------------------------------------------------------
# _parse_content tests
# ---------------------------------------------------------------------------
class FakeTextContent:
"""Mimics mcp TextContent blocks."""
def __init__(self, text: str) -> None:
self.text = text
def test_parse_content_single_json() -> None:
blocks = [FakeTextContent('{"name": "Sales"}')]
result = _parse_content(blocks)
assert result == {"name": "Sales"}
def test_parse_content_single_plain_text() -> None:
blocks = [FakeTextContent("just a string")]
result = _parse_content(blocks)
assert result == "just a string"
def test_parse_content_multiple_blocks() -> None:
blocks = [FakeTextContent("hello"), FakeTextContent(" world")]
result = _parse_content(blocks)
assert result == "hello\n world"
def test_parse_content_non_list() -> None:
result = _parse_content("raw value")
assert result == "raw value"
def test_parse_content_json_array() -> None:
blocks = [FakeTextContent('[{"a": 1}]')]
result = _parse_content(blocks)
assert result == [{"a": 1}]
# ---------------------------------------------------------------------------
# _extract_text tests
# ---------------------------------------------------------------------------
def test_extract_text_from_blocks() -> None:
blocks = [FakeTextContent("error occurred")]
result = _extract_text(blocks)
assert result == "error occurred"
def test_extract_text_non_list() -> None:
result = _extract_text("plain error")
assert result == "plain error"
# ---------------------------------------------------------------------------
# get_client / get_shared_client tests
# ---------------------------------------------------------------------------
def test_get_client_oneshot_returns_fresh() -> None:
c1 = get_client(repl_mode=False)
c2 = get_client(repl_mode=False)
assert c1 is not c2
def test_get_shared_client_returns_same_instance() -> None:
c1 = get_shared_client()
c2 = get_shared_client()
assert c1 is c2

View file

@ -3,14 +3,13 @@
from __future__ import annotations
import json
import sys
from io import StringIO
from pbi_cli.core.output import format_mcp_result, print_json
from pbi_cli.core.output import format_result, print_json
def test_print_json_outputs_valid_json(capsys: object) -> None:
import sys
from io import StringIO
def test_print_json_outputs_valid_json() -> None:
old_stdout = sys.stdout
sys.stdout = buf = StringIO()
try:
@ -22,9 +21,7 @@ def test_print_json_outputs_valid_json(capsys: object) -> None:
assert parsed == {"key": "value"}
def test_print_json_handles_non_serializable(capsys: object) -> None:
import sys
from io import StringIO
def test_print_json_handles_non_serializable() -> None:
from pathlib import Path
old_stdout = sys.stdout
@ -38,14 +35,11 @@ def test_print_json_handles_non_serializable(capsys: object) -> None:
assert "tmp" in parsed["path"]
def test_format_mcp_result_json_mode(capsys: object) -> None:
import sys
from io import StringIO
def test_format_result_json_mode() -> None:
old_stdout = sys.stdout
sys.stdout = buf = StringIO()
try:
format_mcp_result({"name": "Sales"}, json_output=True)
format_result({"name": "Sales"}, json_output=True)
finally:
sys.stdout = old_stdout
@ -53,21 +47,21 @@ def test_format_mcp_result_json_mode(capsys: object) -> None:
assert parsed["name"] == "Sales"
def test_format_mcp_result_empty_list() -> None:
def test_format_result_empty_list() -> None:
# Should not raise; prints "No results." to stderr
format_mcp_result([], json_output=False)
format_result([], json_output=False)
def test_format_mcp_result_dict() -> None:
def test_format_result_dict() -> None:
# Should not raise; prints key-value panel
format_mcp_result({"name": "Test"}, json_output=False)
format_result({"name": "Test"}, json_output=False)
def test_format_mcp_result_list_of_dicts() -> None:
def test_format_result_list_of_dicts() -> None:
# Should not raise; prints table
format_mcp_result([{"name": "A"}, {"name": "B"}], json_output=False)
format_result([{"name": "A"}, {"name": "B"}], json_output=False)
def test_format_mcp_result_string() -> None:
def test_format_result_string() -> None:
# Should not raise; prints string
format_mcp_result("some text", json_output=False)
format_result("some text", json_output=False)
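The renamed output tests exercise four result shapes: JSON mode, an empty list, a dict, a list of dicts, and a plain string. A plain-text sketch of that dispatch — the real implementation renders rich tables and key-value panels rather than bare `print` calls:

```python
import json
import sys
from typing import Any


def format_result(result: Any, json_output: bool = False) -> None:
    """Print a command result as JSON, rows, key-value pairs, or text."""
    if json_output:
        print(json.dumps(result, default=str))
    elif isinstance(result, list) and not result:
        print("No results.", file=sys.stderr)
    elif isinstance(result, list):
        for row in result:  # real code: a rich table
            print(row)
    elif isinstance(result, dict):
        for key, value in result.items():  # real code: key-value panel
            print(f"{key}: {value}")
    else:
        print(result)
```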

View file

@ -9,105 +9,10 @@ import pytest
from pbi_cli.utils.platform import (
_workspace_candidates,
binary_name,
detect_platform,
discover_pbi_port,
ensure_executable,
find_vscode_extension_binary,
)
def test_detect_platform_windows() -> None:
with (
patch("pbi_cli.utils.platform.platform.system", return_value="Windows"),
patch("pbi_cli.utils.platform.platform.machine", return_value="AMD64"),
):
assert detect_platform() == "win32-x64"
def test_detect_platform_macos_arm() -> None:
with (
patch("pbi_cli.utils.platform.platform.system", return_value="Darwin"),
patch("pbi_cli.utils.platform.platform.machine", return_value="arm64"),
):
assert detect_platform() == "darwin-arm64"
def test_detect_platform_linux_x64() -> None:
with (
patch("pbi_cli.utils.platform.platform.system", return_value="Linux"),
patch("pbi_cli.utils.platform.platform.machine", return_value="x86_64"),
):
assert detect_platform() == "linux-x64"
def test_detect_platform_unsupported() -> None:
with (
patch("pbi_cli.utils.platform.platform.system", return_value="FreeBSD"),
patch("pbi_cli.utils.platform.platform.machine", return_value="sparc"),
):
with pytest.raises(ValueError, match="Unsupported platform"):
detect_platform()
def test_binary_name_windows() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Windows"):
assert binary_name() == "powerbi-modeling-mcp.exe"
def test_binary_name_unix() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="Linux"):
assert binary_name() == "powerbi-modeling-mcp"
def test_binary_name_unsupported() -> None:
with patch("pbi_cli.utils.platform.platform.system", return_value="FreeBSD"):
with pytest.raises(ValueError, match="Unsupported OS"):
binary_name()
def test_ensure_executable_noop_on_windows(tmp_path: Path) -> None:
f = tmp_path / "test.exe"
f.write_text("fake", encoding="utf-8")
with patch("pbi_cli.utils.platform.platform.system", return_value="Windows"):
ensure_executable(f) # should be a no-op
def test_find_vscode_extension_binary_no_dir(tmp_path: Path) -> None:
with patch("pbi_cli.utils.platform.Path.home", return_value=tmp_path):
result = find_vscode_extension_binary()
assert result is None
def test_find_vscode_extension_binary_no_match(tmp_path: Path) -> None:
ext_dir = tmp_path / ".vscode" / "extensions"
ext_dir.mkdir(parents=True)
with patch("pbi_cli.utils.platform.Path.home", return_value=tmp_path):
result = find_vscode_extension_binary()
assert result is None
def test_find_vscode_extension_binary_found(tmp_path: Path) -> None:
ext_name = "analysis-services.powerbi-modeling-mcp-0.4.0"
server_dir = tmp_path / ".vscode" / "extensions" / ext_name / "server"
server_dir.mkdir(parents=True)
fake_bin = server_dir / "powerbi-modeling-mcp.exe"
fake_bin.write_text("fake", encoding="utf-8")
with (
patch("pbi_cli.utils.platform.Path.home", return_value=tmp_path),
patch("pbi_cli.utils.platform.binary_name", return_value="powerbi-modeling-mcp.exe"),
):
result = find_vscode_extension_binary()
assert result is not None
assert result.name == "powerbi-modeling-mcp.exe"
# ---------------------------------------------------------------------------
# discover_pbi_port tests
# ---------------------------------------------------------------------------
def test_discover_pbi_port_no_pbi(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""Returns None when Power BI Desktop workspace dir doesn't exist."""
with (
@ -198,7 +103,6 @@ def test_discover_pbi_port_store_version(tmp_path: Path, monkeypatch: pytest.Mon
store_ws = tmp_path / "Microsoft" / "Power BI Desktop Store App" / "AnalysisServicesWorkspaces"
data_dir = store_ws / "AnalysisServicesWorkspace_xyz" / "Data"
data_dir.mkdir(parents=True)
# Store version writes UTF-16 LE without BOM
(data_dir / "msmdsrv.port.txt").write_bytes("57426".encode("utf-16-le"))
with (