microsoft-fabric · MCP Server

Microsoft Fabric + Improvado MCP — Data Platform, One Question Away

Improvado's MCP server connects Microsoft Fabric to your AI agent. Query lakehouses, data warehouses, pipeline runs, semantic models, and OneLake data in plain English. Works with Claude, ChatGPT, Cursor, and any MCP-compatible tool.

46K+ metrics · Read & Write access · 500+ platforms · <60s setup
Read

Read: Query Any Fabric Asset Without Writing SQL

Ask your AI agent about lakehouse tables, warehouse metrics, pipeline run status, or semantic model values — across workspaces and domains. The MCP server translates natural language into Fabric API and SQL endpoint calls.

Example prompts

"What's the row count and last refresh time for our top 10 OneLake tables by query frequency?"

30 min → 30 sec

"Show me all pipeline runs that failed in the last 7 days. Include error message and affected workspace."

20 min → 20 sec

"Compare query performance across all warehouses — average execution time and resource consumption by workspace."

2 hrs → 2 min
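Under the hood, a prompt like the failed-runs question above becomes a query against Fabric's APIs or SQL analytics endpoints. A minimal sketch of that translation step, assuming an illustrative `pipeline_runs` table and column names (real Fabric run history lives behind the REST API, not a fixed table):

```python
from datetime import datetime, timedelta, timezone

def failed_pipeline_runs_sql(days: int = 7) -> str:
    """Build the T-SQL a SQL analytics endpoint could receive for the
    'failed pipeline runs in the last N days' prompt. Table and column
    names here are hypothetical placeholders."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )
    return (
        "SELECT run_id, pipeline_name, workspace_name, error_message, start_time "
        "FROM pipeline_runs "
        f"WHERE status = 'Failed' AND start_time >= '{since}' "
        "ORDER BY start_time DESC;"
    )

print(failed_pipeline_runs_sql())
```

The point is the shape of the translation: natural language in, a scoped, time-bounded query out.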
Works with Claude · ChatGPT · Cursor · +5
Write

Write: Trigger Pipelines and Manage Workspaces

Your AI agent doesn't just read Fabric data — it acts on it. Trigger pipeline refreshes, update workspace configurations, manage capacity assignments, and initiate data ingestion jobs through natural language commands.

Example prompts

"Trigger a refresh for all semantic models in the Marketing workspace that haven't updated in the last 24 hours."

30 min → 2 min

"Reassign the three lowest-priority lakehouses to the Reserve capacity tier to free up F64 resources."

45 min → 5 min

"Create a new workspace for the Q2 Finance reporting project and assign the appropriate security roles."

1.5 hrs → 10 min
Every action logged · Fully reversible · SOC 2 certified
Monitor

Monitor: Pipeline Failures and Capacity Anomalies

Set up your AI agent to watch Microsoft Fabric continuously. Get alerts when pipelines fail, capacity consumption spikes, semantic model refreshes fall behind schedule, or OneLake data goes stale.

Example prompts

"Alert me when any production pipeline fails more than twice in a 24-hour window."

Manual → auto

"Daily: send a summary of all pipeline run statuses, failed refreshes, and capacity utilization by workspace."

2 hrs → auto

"Flag any semantic model used in a live Power BI report that hasn't refreshed in over 48 hours."

Manual → auto
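The first alert rule above ("fails more than twice in a 24-hour window") reduces to a trailing-window count. A minimal sketch, with made-up timestamps standing in for pipeline failure events:

```python
from datetime import datetime, timedelta

def should_alert(failure_times, now, window_hours=24, threshold=2):
    """Return True when failures inside the trailing window exceed the
    threshold -- i.e. 'fails more than twice in a 24-hour window'."""
    cutoff = now - timedelta(hours=window_hours)
    recent = [t for t in failure_times if t >= cutoff]
    return len(recent) > threshold

now = datetime(2025, 6, 1, 12, 0)
fails = [now - timedelta(hours=h) for h in (1, 5, 30)]  # only 2 inside 24h
print(should_alert(fails, now))   # False: at threshold, not over it
fails.append(now - timedelta(hours=2))                   # 3 inside 24h
print(should_alert(fails, now))   # True: threshold exceeded
```

Note the strict `>` comparison: exactly two failures does not fire, matching "more than twice."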
Alerts sent to Slack, email, or your AI agent
Full cycle

The Closed Loop: Read → Decide → Write → Monitor

Your AI agent doesn't just surface data; it acts. Trigger pipeline refreshes, rebalance capacity assignments, update workspace configurations, and create new workspaces, all through natural language. The MCP server translates intent into Fabric API operations.

Every phase runs through the same MCP connection. One protocol, all platforms, full governance. No switching between tools.

Ideate
Launch
Measure
Analyze
Report
Iterate

One conversation. All six phases. Every platform.

The daily grind

Common problems. Direct answers.

Challenge 1

Cross-Workspace Visibility Requires Admin Access

The problem

Organizations running multiple Fabric workspaces — by department, region, or business unit — have no single view of pipeline health, data freshness, or capacity consumption. Getting a consolidated status means navigating each workspace separately or building a custom monitoring dashboard that nobody maintains.

How MCP solves it

Improvado normalizes Fabric metadata across all connected workspaces into one queryable layer. Ask the MCP server for cross-workspace pipeline status, capacity utilization, or data freshness — one question, all workspaces, one answer.

Try asking
Show me pipeline failure rate and average refresh latency across all workspaces for the last 7 days.
Answer in seconds
All data sources, one query
Challenge 2

Fabric Data and Marketing Metrics Don't Connect

The problem

Data teams store marketing performance data in Fabric lakehouses and warehouses — but joining that data with live ad platform metrics to answer business questions requires either scheduled exports from ad platforms into Fabric, or custom pipelines that break regularly. The business never has a truly current view.

How MCP solves it

Improvado connects Fabric to 1,000+ marketing data sources in one normalized model. The MCP server lets the AI agent answer questions combining Fabric warehouse data with live Google Ads, Meta, LinkedIn, and other platform metrics — without intermediate pipelines.

Try asking
Join our Fabric customer LTV data with last month's Google Ads spend by campaign. Which campaigns had the highest LTV customers?
Full detail preserved
No data loss on export
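The LTV-vs-spend question above is, at its core, a join on campaign name followed by a sort. A minimal sketch with hypothetical rows standing in for the Fabric LTV table and live Google Ads spend (all names and figures are made up for illustration):

```python
# Hypothetical data: avg customer LTV per campaign (from a Fabric
# warehouse) and last month's ad spend in USD (from Google Ads).
ltv = {"brand_search": 910.0, "retargeting": 640.0, "prospecting": 310.0}
spend = {"brand_search": 12_000, "retargeting": 8_500, "prospecting": 15_000}

def rank_by_ltv(ltv_by_campaign, spend_by_campaign):
    """Inner-join the two sources on campaign name and rank by average
    customer LTV, highest first."""
    joined = [
        (name, ltv_by_campaign[name], spend_by_campaign[name])
        for name in ltv_by_campaign
        if name in spend_by_campaign
    ]
    return sorted(joined, key=lambda row: row[1], reverse=True)

for name, avg_ltv, usd in rank_by_ltv(ltv, spend):
    print(f"{name}: LTV ${avg_ltv:.0f}, spend ${usd:,}")
```

In practice the MCP server performs this join across the normalized model rather than in client code; the sketch only shows the logic the prompt implies.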
Challenge 3

Capacity Planning Is Reactive, Not Proactive

The problem

Fabric capacity overruns cause query throttling and pipeline failures. Most teams only discover capacity pressure after something breaks — because monitoring CU consumption across workspaces and correlating it with job schedules requires custom queries and manual analysis that no one prioritizes.

How MCP solves it

Ask the MCP server for capacity consumption trends, peak usage patterns, and workloads approaching limits. The AI agent identifies which workspaces and jobs are consuming the most resources and suggests rebalancing — before overruns occur.

Try asking
Which workspaces are consuming the most Fabric CUs this week? Show usage trend and flag anything approaching 80% of allocated capacity.
Unified data model
Compare anything side by side
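The 80%-of-allocation flag in the prompt above is a simple ratio check per workspace. A sketch with invented CU numbers (workspace names and allocations are illustrative, not real Fabric SKU data):

```python
def flag_capacity_pressure(usage_cu, allocated_cu, threshold=0.80):
    """Return (workspace, utilization) pairs at or above the threshold
    fraction of allocated capacity, sorted worst-first."""
    flagged = [
        (ws, used / allocated_cu[ws])
        for ws, used in usage_cu.items()
        if used / allocated_cu[ws] >= threshold
    ]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

usage = {"Finance": 52.0, "Marketing": 30.1, "Sales": 18.0}  # CUs this week
alloc = {"Finance": 64.0, "Marketing": 32.0, "Sales": 64.0}  # allocations
for ws, pct in flag_capacity_pressure(usage, alloc):
    print(f"{ws}: {pct:.0%} of allocated capacity")
```

Running the check proactively, before throttling starts, is the whole point: the agent surfaces Marketing and Finance here while Sales stays quiet.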
👥 Teams

One Framework. Five Roles. Zero Setup.

Same MCP connection, different workflows for every team member. Each role asks in natural language — the MCP server handles the complexity (rate limits, auth, schema normalization, governance) behind the scenes.

Agency CEO
Portfolio health. Client risk. Revenue signals.
Media Strategist
70% strategy, not 70% ops. Auto campaign QA.
Marketing Analyst
Zero wrangling. Cross-platform. AI narratives.
Account Manager
QBR decks auto-generated. Call prep in 30s.
Creative Director
Performance-to-brief. Predict winners before spend.
FAQ

Common questions

Does Microsoft Fabric have an official MCP server?

Microsoft has released MCP-compatible tooling for Fabric as part of its Copilot ecosystem, primarily for use within the Fabric portal. Improvado provides a hosted MCP server that connects Fabric data to any MCP-compatible AI tool — including Claude, ChatGPT, and Cursor — outside of Microsoft's native environment.

What Microsoft Fabric data is available through the MCP server?

Lakehouse tables, data warehouse schemas and query results, pipeline run history and status, semantic model metadata and refresh history, workspace configurations, OneLake inventory, and capacity consumption metrics. Improvado normalizes the Fabric REST API and SQL analytics endpoints.

Can the Fabric MCP server combine Fabric data with external marketing data sources?

Yes. Improvado connects Microsoft Fabric to 1,000+ marketing and analytics data sources. The MCP server can answer questions that join Fabric warehouse data with live ad platform metrics, CRM data, or any other connected source — in one query.

Which AI tools work with the Microsoft Fabric MCP server?

Any tool supporting the Model Context Protocol: Claude Desktop, ChatGPT, Cursor, Windsurf, Gemini, and custom applications using the MCP HTTP transport. Improvado's server works outside of Microsoft's Copilot environment — giving teams flexibility in which AI tool they use.

Is my Microsoft Fabric data secure through the MCP server?

Yes. Improvado is SOC 2 Type II certified. Azure OAuth credentials and service principal tokens are stored in an encrypted vault. All queries run through Improvado's secure proxy — credentials are never passed to the AI tool directly.

How quickly can data teams start querying Microsoft Fabric with AI?

If Microsoft Fabric is already connected in Improvado, the MCP server is ready immediately. For Claude Desktop or Cursor, add one configuration line. For new accounts, Azure service principal authentication typically completes in under 20 minutes.
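For Claude Desktop, that configuration is an entry in `claude_desktop_config.json`. A sketch, assuming the hosted server is reached over HTTP via the `mcp-remote` bridge; the server name and URL below are placeholders, so substitute the endpoint shown in your Improvado account:

```json
{
  "mcpServers": {
    "improvado-fabric": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<your-improvado-mcp-endpoint>"]
    }
  }
}
```

Clients that support remote MCP servers natively can point at the same URL without the bridge.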

Stop Reporting. Start Executing.

Connect your data to an AI agent in under 60 seconds. The closed loop starts with one conversation.

SOC 2 Type II · GDPR · 500+ Platforms