
LaunchDarkly + Improvado MCP — Feature Flags, Instantly Queryable

Improvado gives your AI agent direct access to LaunchDarkly flag configurations, targeting rules, rollout percentages, and experiment results. Ask about any flag across any environment without opening the LaunchDarkly dashboard. Works with Claude, Cursor, and any MCP-compatible tool.

46K+ metrics · Read & Write access · 500+ platforms · <60s setup
Read

Read: Query Any Flag State in Seconds

Stop hunting through environments and projects to check flag status. Ask your AI agent which flags are live, what targeting rules are active, and which experiments are running. The MCP server handles LaunchDarkly API calls across all projects and environments.

Example prompts

"Which feature flags are enabled for enterprise accounts but disabled for free tier users right now?"

15 min → 20 sec

"Show me all flags that have been active for over 90 days and are still set to 100% rollout."

20 min → 30 sec

"List every flag modified in production in the last 48 hours. Include who changed it and what changed."

30 min → 45 sec
Works with Claude · ChatGPT · Cursor · +5 more
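Under the hood, read queries like these map onto LaunchDarkly's public REST API. A minimal sketch of the request the MCP server might issue to list flags in one environment (the project key and token below are placeholders):

```python
# Sketch of the REST call behind a "list my flags" prompt.
# Endpoint shape is from LaunchDarkly's public REST API (api/v2);
# project key and token are placeholders.
BASE = "https://app.launchdarkly.com/api/v2"

def list_flags_request(project_key: str, env: str, token: str) -> tuple[str, dict]:
    """Build the URL and headers for listing all flags in one environment."""
    url = f"{BASE}/flags/{project_key}?env={env}"
    # LaunchDarkly API tokens are sent directly in the Authorization header
    headers = {"Authorization": token}
    return url, headers

url, headers = list_flags_request("default", "production", "api-xxxx")
print(url)  # https://app.launchdarkly.com/api/v2/flags/default?env=production
```

The MCP server adds pagination, rate-limit handling, and cross-project fan-out on top of calls like this one.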
Write

Write: Update Flags Without Touching the Dashboard

Your AI agent can toggle flags, update targeting rules, and adjust rollout percentages directly. Routine flag operations that require navigating multiple screens happen in a single prompt.

Example prompts

"Enable the new-checkout-flow flag for the beta segment in the staging environment."

8 min → 20 sec

"Roll back the payment-redesign flag to 0% in production. Something is wrong."

5 min → 10 sec

"Archive all flags tagged 'deprecated' that haven't been evaluated in 30 days."

45 min → 1 min
Every action logged · Fully reversible · SOC 2 certified
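Writes like the rollback prompt above translate to LaunchDarkly's semantic-patch format. A sketch of the PATCH body for turning a flag off in one environment (the flag key itself goes in the URL path, elided here; the exact response handling is up to the client):

```python
import json

def rollback_patch(environment_key: str) -> tuple[dict, dict]:
    """Build a semantic-patch body that turns a flag off in one environment."""
    body = {
        "environmentKey": environment_key,
        "instructions": [{"kind": "turnFlagOff"}],
    }
    # Semantic patches require LaunchDarkly's domain-model content type
    headers = {
        "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch"
    }
    return body, headers

body, headers = rollback_patch("production")
print(json.dumps(body))
```

Because the instruction is declarative ("turn this flag off") rather than a raw JSON patch, the operation is easy to audit and to reverse.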
Monitor

Monitor: Catch Flag Drift and Rollout Anomalies Early

Set your AI agent to watch for stale flags, sudden targeting changes, and experiment results drifting outside expected ranges. Know before users notice.

Example prompts

"Alert me if any production flag is rolled back to 0% outside of a scheduled maintenance window."

Manual → auto

"Every Friday: list all flags created this week that are still in draft state and haven't been reviewed."

2 hrs → auto

"Flag any experiment where conversion rate variance exceeds 15% between control and treatment."

Manual → auto
Alerts sent to Slack, email, or your AI agent
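A monitor like the first prompt above reduces to a simple check over flag state. A sketch, using a simplified per-flag summary (the field names here are illustrative, not the exact LaunchDarkly API response shape):

```python
def flags_rolled_to_zero(flags: list[dict]) -> list[str]:
    """Return keys of flags whose production fallthrough rollout is 0%.

    Expects simplified per-flag summaries; field names are illustrative.
    """
    hits = []
    for f in flags:
        rollout = f.get("environments", {}).get("production", {}).get("rollout_percent")
        if rollout == 0:
            hits.append(f["key"])
    return hits

sample = [
    {"key": "payment-redesign", "environments": {"production": {"rollout_percent": 0}}},
    {"key": "new-checkout-flow", "environments": {"production": {"rollout_percent": 50}}},
]
print(flags_rolled_to_zero(sample))  # ['payment-redesign']
```

Run on a schedule (or by the agent itself), a check like this is what turns "Manual → auto".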
Full cycle

The Closed Loop: Read → Decide → Write → Monitor

Your AI agent doesn't just surface data — it acts. Toggle flags, update targeting rules, adjust rollout percentages, and roll back releases, all through natural language. The MCP server translates intent into API operations.

Every phase runs through the same MCP connection. One protocol, all platforms, full governance. No switching between tools.

Ideate
Launch
Measure
Analyze
Report
Iterate

One conversation. All six phases. Every platform.

The daily grind

Common problems. Direct answers.

Challenge 1

Flag Sprawl: Nobody Knows What's Safe to Remove

The problem

Hundreds of old flags accumulate across projects. No one knows which are still evaluated in production code, which are safe to archive, and which are secretly load-bearing. The audit takes days and still misses things.

How MCP solves it

Improvado extracts evaluation data alongside flag metadata. Ask your AI agent to identify flags with zero evaluations in the last 30 days across all environments. It cross-references creation date, last modification, and current targeting rules — producing a prioritized cleanup list in seconds.

Try asking
Which flags had zero evaluations in production in the last 30 days but are still active? Group by project.
Answer in seconds
All data sources, one query
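The cleanup query above amounts to a filter-and-group over flag metadata. A minimal sketch under assumed field names (real data would come from LaunchDarkly's flag status and evaluation endpoints via the MCP server):

```python
from collections import defaultdict

def stale_flags(flags: list[dict]) -> dict[str, list[str]]:
    """Group still-active flags with zero recent evaluations by project.

    Field names are illustrative stand-ins for LaunchDarkly's
    flag-status and evaluation data.
    """
    by_project: dict[str, list[str]] = defaultdict(list)
    for f in flags:
        if f["evaluations_last_30d"] == 0 and f["active"]:
            by_project[f["project"]].append(f["key"])
    return dict(by_project)

sample = [
    {"project": "web", "key": "old-banner", "evaluations_last_30d": 0, "active": True},
    {"project": "web", "key": "checkout-v2", "evaluations_last_30d": 9214, "active": True},
    {"project": "mobile", "key": "legacy-nav", "evaluations_last_30d": 0, "active": True},
]
print(stale_flags(sample))  # {'web': ['old-banner'], 'mobile': ['legacy-nav']}
```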
Challenge 2

Targeting Rules Diverge Across Environments

The problem

A flag has slightly different targeting rules in staging vs. production. No one noticed when it was changed three sprints ago. Now a release is failing QA because behavior doesn't match what was tested. Tracking down the diff means manually comparing rule sets.

How MCP solves it

The MCP server makes environment comparison instant. Your AI agent pulls targeting rules from both environments and surfaces every discrepancy — which user segments differ, which percentage rollouts are misaligned, and which rule orderings changed.

Try asking
Compare the targeting rules for the payment-v2 flag between staging and production. What's different?
Full detail preserved
No data loss on export
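At its core, the comparison is a field-by-field diff of two environment configs. A sketch using a simplified config shape (the keys below stand in for LaunchDarkly's full per-environment configuration):

```python
def diff_targeting(staging: dict, production: dict) -> dict:
    """Report every field that differs between two environment configs.

    The config keys here are a simplified stand-in for LaunchDarkly's
    full targeting configuration (rules, segments, rollouts, orderings).
    """
    diffs = {}
    for field in sorted(set(staging) | set(production)):
        if staging.get(field) != production.get(field):
            diffs[field] = {
                "staging": staging.get(field),
                "production": production.get(field),
            }
    return diffs

staging_cfg = {"fallthrough_percent": 50, "segments": ["beta"]}
production_cfg = {"fallthrough_percent": 25, "segments": ["beta"]}
print(diff_targeting(staging_cfg, production_cfg))
```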
Challenge 3

Experiment Results Are Buried and Never Acted On

The problem

Experiments run for weeks, results accumulate, and then nobody acts. The data is in LaunchDarkly but interpreting it means exporting CSVs, running stats manually, and writing up a summary that might get ignored anyway. Flags linger in experiment state indefinitely.

How MCP solves it

Ask your AI agent to summarize experiment results in plain language. It pulls conversion rates, confidence intervals, and sample sizes — then tells you whether the results are statistically significant and what action to take next.

Try asking
Summarize the results for all running experiments. Which ones have reached statistical significance and what should we do?
Unified data model
Compare anything side by side
👥 Teams

One Framework. Five Roles. Zero Setup.

Same MCP connection, different workflows for every team member. Each role asks in natural language — the MCP server handles the complexity (rate limits, auth, schema normalization, governance) behind the scenes.

Agency CEO
Portfolio health. Client risk. Revenue signals.
Media Strategist
70% strategy, not 70% ops. Auto campaign QA.
Marketing Analyst
Zero wrangling. Cross-platform. AI narratives.
Account Manager
QBR decks auto-generated. Call prep in 30s.
Creative Director
Performance-to-brief. Predict winners before spend.
FAQ

Common questions

What LaunchDarkly data can AI agents access through this MCP?

Feature flags (all configurations, targeting rules, rollout percentages, prerequisite flags), environments, projects, experiments and their results, segments, audit log entries, and flag evaluation metrics. Basically everything accessible through LaunchDarkly's REST API, queryable in natural language.

Can AI agents modify flags in production, or only read?

Both read and write operations are supported. You control the permission scope through LaunchDarkly's API token settings. Many teams configure read-only access for AI agents in production and full write access in staging environments. Write operations in production always require a confirmation step before executing.

Does this work across multiple LaunchDarkly projects and environments?

Yes. The MCP server connects to all projects and environments your API token has access to. Cross-project and cross-environment queries work in a single prompt — for example, comparing flag states across dev, staging, and production simultaneously — all through Improvado's hosted MCP server.

How does this help with flag cleanup and technical debt?

The most common use case is identifying stale flags. Your AI agent can query evaluation counts, creation dates, and last modification timestamps across all flags, then produce a prioritized list of candidates for archiving or removal. A manual audit that used to take days happens in under a minute.

Is my LaunchDarkly API token stored securely?

Yes. Improvado stores all API credentials in an encrypted vault — SOC 2 Type II certified. Your AI agent sends queries through Improvado's secure proxy. Raw API keys never appear in conversation logs or prompt histories.

How long does setup take?

Under 5 minutes. Generate a LaunchDarkly API token, connect it to Improvado, then add one line to your Claude Desktop or Cursor MCP config. If you already use Improvado, your credentials may already be connected — check the integrations panel at app.improvado.io.
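As an illustration, a Claude Desktop entry might look like the fragment below. The server name, the `mcp-remote` bridge, and the URL are placeholders — use the exact values from your Improvado integrations panel:

```json
{
  "mcpServers": {
    "improvado-launchdarkly": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.improvado.io/<your-endpoint>"]
    }
  }
}
```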

Stop Reporting. Start Executing.

Connect your data to an AI agent in under 60 seconds. The closed loop starts with one conversation.

SOC 2 Type II GDPR 500+ Platforms