PyPI · MCP Server

PyPI MCP — Package Intelligence for Engineering Teams

Improvado's MCP server connects PyPI package data to your AI agent. Query download trends, dependency risks, version histories, and security advisory information — without manually checking PyPI pages or writing custom scripts. Works with Claude, Cursor, and any MCP-compatible tool.

46K+ metrics · Read & Write access · 500+ platforms · <60s setup
📈 Read

Read: Analyze Package Data Without Manual Lookups

Stop manually checking PyPI pages and release histories. Ask your AI agent about package download trends, dependency counts, release cadence, or which packages in your requirements have known vulnerabilities.

Your AI agent reads harmonized data from PyPI's JSON API, download statistics, and advisory databases through one connection — package names, versions, and vulnerability IDs resolve to consistent fields automatically.

Example prompts
"Which dependencies have known CVEs?" 2h → 40s
"Download trend for requests vs. httpx?" 1h → 30s
"Release cadence for my top dependencies?" 45m → 15s
Works with Claude · ChatGPT · Cursor +5
Write actions
"Draft release notes for the next version" 5 days → 20m
"Update descriptions across my published packages" 2h → 1m
"Record pin decisions for all flagged dependencies" 30m → 10s
🛡 Every action logged · Fully reversible · SOC 2 certified
🚀 Write

Write: Manage Package Metadata and Release Information via AI

Update package descriptions, manage release notes, and track version-pinning decisions directly through your AI agent — the package-management documentation tasks that pile up during release cycles.

250+ governance rules enforce naming conventions, budget limits, and KPI thresholds. SOC 2 Type II certified.

⚠️ Monitor

Monitor: Track Package Health and Security Across Your Stack

Set AI-powered watches on new CVEs affecting your dependencies, unexpected download spikes on your published packages, and dependency staleness. Get notified before security issues reach production.

Automated weekly reports, anomaly flagging, and budget alerts — all from a single conversation. No more morning check-ins across 5 dashboards.

Monitor prompts
"Flag ad groups over 120% budget" 3h → 1m
"Weekly report: spend, CPA, anomalies" 3h → auto
"Which creatives are fatiguing?" 2h → 30s
Alerts sent to Slack, email, or your AI agent
💡
Ideate
🚀
Launch
📈
Measure
🔍
Analyze
📝
Report
🔄
Iterate
One conversation. All six phases. Every platform.
🔄 Full Cycle

The Closed Loop: Read → Decide → Write → Monitor

Read package data, decide on upgrades or pins, write the metadata updates, then monitor for new advisories — the loop closes inside one conversation.

Every phase runs through the same MCP connection. One protocol, all platforms, full governance. No switching between tools.

Challenge 1

Dependency Audits Require Chaining Multiple Tools

THE PROBLEM

You want to audit your Python dependencies for security issues. pip-audit gives you CVEs. Safety CLI gives you different results. PyPI's JSON API has release metadata. None of these talk to each other. Building a comprehensive picture of your dependency risk means running three tools, exporting their output, and manually reconciling differences.

HOW MCP SOLVES IT

Your AI agent queries PyPI package data, release histories, and security advisory databases through one MCP connection. Ask for a combined view of your requirements — versions, known CVEs, last release date, and download health — in a single query.
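Under the hood, a combined view like this can be assembled from a single package payload, since PyPI's JSON API returns metadata, release files, and advisory entries together. The sketch below uses a hypothetical, heavily trimmed payload so it runs offline; in real use the agent would fetch `https://pypi.org/pypi/<name>/json` for each pinned package.

```python
# Sketch: collapse one PyPI-style JSON payload into a single audit row.
# SAMPLE is a hypothetical, trimmed payload mimicking the shape of the
# response from https://pypi.org/pypi/<name>/json.

SAMPLE = {
    "info": {"name": "examplepkg", "version": "2.1.0"},
    "vulnerabilities": [{"id": "PYSEC-2024-0001", "fixed_in": ["2.1.1"]}],
    "urls": [{"upload_time_iso_8601": "2024-03-02T10:00:00Z"}],
}

def summarize(payload: dict) -> dict:
    """Merge metadata, latest-release date, and advisories into one row."""
    info = payload["info"]
    uploads = [u["upload_time_iso_8601"] for u in payload.get("urls", [])]
    return {
        "name": info["name"],
        "version": info["version"],
        "last_release": max(uploads) if uploads else None,
        "advisories": [v["id"] for v in payload.get("vulnerabilities", [])],
    }

print(summarize(SAMPLE))
```

One such row per requirement is enough to answer "versions, known CVEs, last release date" in a single pass, with no tool-output reconciliation.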

Try asking
"Show ROAS across all 120 accounts"
Answer in seconds
All data sources, one query
Try asking
"What's my CPL in New York vs. California?"
🔍
Full detail preserved
No data loss on export
Challenge 2

Transitive Dependency Risks Are Invisible Until They Break

THE PROBLEM

You pin your direct dependencies carefully. But your dependencies have dependencies, and those have vulnerabilities. The package you trust may pull in a vulnerable transitive dependency that isn't visible in your requirements file. Most teams only discover this when a security scanner runs in CI — or worse, after an incident.

HOW MCP SOLVES IT

Ask your AI agent to trace transitive dependencies for your key packages and surface which ones introduce CVEs your direct requirements file doesn't show. Understand the full dependency graph risk, not just the top layer.
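Tracing the full graph amounts to a breadth-first walk over each package's declared dependencies. The sketch below uses a hypothetical hard-coded graph and vulnerable set so it runs offline; in practice the edges would come from each package's `requires_dist` metadata and the vulnerable set from advisory data.

```python
# Sketch: find dependency chains that end at a vulnerable transitive package.
# DEPS and VULNERABLE are hypothetical stand-ins for requires_dist metadata
# and advisory lookups.
from collections import deque

DEPS = {
    "myapp": ["requests", "fastlib"],
    "requests": ["urllib3", "idna"],
    "fastlib": ["oldcrypto"],
    "urllib3": [], "idna": [], "oldcrypto": [],
}
VULNERABLE = {"oldcrypto"}

def vulnerable_paths(root: str) -> list[list[str]]:
    """Return every dependency chain from root to a vulnerable package."""
    hits, queue = [], deque([[root]])
    while queue:
        path = queue.popleft()
        for dep in DEPS.get(path[-1], []):
            if dep in path:  # guard against dependency cycles
                continue
            if dep in VULNERABLE:
                hits.append(path + [dep])
            queue.append(path + [dep])
    return hits

print(vulnerable_paths("myapp"))
```

The chain itself (`myapp → fastlib → oldcrypto`) is the useful part: it tells you which direct dependency to upgrade or replace, not just that a CVE exists somewhere below you.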

Challenge 3

Abandoned Package Decisions Are Never Revisited

THE PROBLEM

Three years ago, someone chose a library that seemed well-maintained. It's now used in 12 services. The maintainer archived the repo 18 months ago. PyPI still serves it. There's no automated process to flag that a package you're depending on has gone unmaintained — you only find out when a security issue has no fix.

HOW MCP SOLVES IT

Your AI agent audits PyPI release history and maintenance signals across your full dependency list. It surfaces packages with no releases in 12+ months, archived source repos, or rapidly declining downloads — the early signals of abandonment.
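The "no releases in 12+ months" check is a simple cutoff comparison once last-release dates are in hand. The sketch below uses hypothetical dates and a fixed "now"; in practice the dates come from upload times in each package's PyPI JSON payload.

```python
# Sketch: flag dependencies with no release in the last N months.
# LAST_RELEASE values are hypothetical sample data.
from datetime import datetime, timedelta

LAST_RELEASE = {
    "activelib": datetime(2025, 1, 15),
    "stalelib": datetime(2022, 6, 1),
}

def stale_packages(now: datetime, months: int = 12) -> list[str]:
    """Return package names whose latest release predates the cutoff."""
    cutoff = now - timedelta(days=months * 30)  # coarse month approximation
    return sorted(name for name, last in LAST_RELEASE.items() if last < cutoff)

print(stale_packages(datetime(2025, 6, 1)))  # → ['stalelib']
```

Combining this with archived-repo and declining-download signals turns a one-off audit into a recurring abandonment watch.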

Try asking
"PMax vs. Search ROAS for Q1?"
⚖️
Unified data model
Compare anything side by side
Engineering Manager
Dependency health. Upgrade risk. Release signals.
Security Engineer
CVE triage, not ticket triage. Auto dependency audits.
Platform Engineer
Zero wrangling. Cross-repo visibility. AI narratives.
OSS Program Manager
Adoption reports auto-generated. Review prep in 30s.
Package Maintainer
Downloads-to-roadmap. Spot adoption shifts early.
👥 Teams

One Framework. Five Roles. Zero Setup.

Same MCP connection, different workflows for every team member. Engineering Managers get dependency health. Security Engineers get automated CVE triage. Platform Engineers get cross-repo reports. OSS Program Managers get auto-generated adoption summaries. Package Maintainers get data-driven release planning.

Each role asks in natural language. The MCP server handles the complexity — rate limits, auth, schema normalization, governance — behind the scenes.

Frequently Asked Questions

What PyPI data can I query through the MCP server?

Package metadata (description, classifiers, maintainers, license), release history and version details, download statistics, dependency declarations, and security advisory information from PyPI's JSON API and associated vulnerability databases.

Can I query download stats for packages I publish as well as packages I depend on?

Yes. You can query public download statistics for any package on PyPI — whether you publish it or depend on it. This lets you track adoption of your own packages and monitor download health of your critical dependencies in the same conversation.

Does this include security vulnerability data or just package metadata?

Both. The MCP server queries PyPI's package API for metadata and release data, and also pulls from PyPI's own advisory database (which feeds from OSV). You can ask about known CVEs for specific packages or scan your entire requirements list for vulnerabilities in one query.
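Scanning "your entire requirements list" starts with turning pinned lines into (name, version) pairs a vulnerability lookup can consume. The sketch below handles only the simple `name==version` form and uses hypothetical sample content; real requirement specifiers also allow extras, markers, and version ranges.

```python
# Sketch: parse pinned requirements lines into (name, version) pairs.
# REQUIREMENTS is hypothetical sample content; only `name==version` pins
# are handled here.
import re

REQUIREMENTS = """\
requests==2.31.0
# comments and blank lines are skipped
urllib3==2.0.7
"""

def parse_pins(text: str) -> list[tuple[str, str]]:
    """Extract (name, version) from exact pins, ignoring comments/blanks."""
    pins = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        m = re.fullmatch(r"([A-Za-z0-9_.\-]+)==([\w.]+)", line)
        if m:
            pins.append((m.group(1), m.group(2)))
    return pins

print(parse_pins(REQUIREMENTS))
```

Each pair can then be checked against advisory data in the same conversation, which is the one-query scan the answer above describes.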

How is this useful for teams that don't publish packages to PyPI publicly?

Even if you only consume packages without publishing, the MCP server is valuable for dependency auditing, security scanning, and staleness detection across your full Python stack. Teams using private PyPI mirrors or artifact repositories can also connect those through Improvado's integration layer.

Can I track download trends for multiple packages across different Python versions through Improvado MCP?

Yes. Improvado MCP extracts PyPI download statistics and makes them queryable by package name, version, Python version, and installer type. AI agents can identify which package versions are most widely used, detect adoption spikes after a release, or compare download velocity across a portfolio of maintained packages — all in plain language queries.
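A per-Python-version comparison reduces to computing each version's share of total downloads. The counts below are hypothetical; similar per-version breakdowns are available from the pypistats.org API or the PyPI BigQuery public dataset.

```python
# Sketch: compare download share by Python minor version.
# DOWNLOADS holds hypothetical sample counts.
DOWNLOADS = {"3.9": 12_000, "3.10": 48_000, "3.11": 90_000, "3.12": 50_000}

def version_share(counts: dict[str, int]) -> dict[str, float]:
    """Return each version's fraction of total downloads, rounded to 3 dp."""
    total = sum(counts.values())
    return {v: round(n / total, 3) for v, n in counts.items()}

shares = version_share(DOWNLOADS)
print(max(shares, key=shares.get))  # most-used Python version in this sample
```

The same share calculation works for comparing installer types or package versions, which is how adoption spikes after a release show up.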

How does querying PyPI data through an AI agent differ from using the BigQuery public dataset directly?

The PyPI BigQuery public dataset requires SQL knowledge, BigQuery access, and manual joins to combine with other data sources. Improvado MCP makes the same underlying data queryable through natural language, and allows AI agents to cross-reference PyPI download trends with GitHub star history, issue counts, or internal dependency tracking data without writing any SQL. It is especially useful for open-source program managers who want quick answers without a data engineering workflow.


Stop Reporting. Start Executing.

Connect your data to an AI agent in under 60 seconds. The closed loop starts with one conversation.

SOC 2 Type II
GDPR
500+ Platforms
46K+ Metrics