Marketing analysts today face a paradox. You have more campaign data than ever before — ad platforms, CRMs, email tools, analytics suites — yet getting a clear answer to "Are we on track?" can take hours of manual work. Platforms change their APIs without warning. Attribution windows don't match across channels. One team calls it 'leads,' another calls it 'conversions,' and finance wants to know about pipeline contribution.
Without a unified monitoring system, you're reacting to problems after they've already burned budget. A LinkedIn campaign underperforms for three days before anyone notices. Display ads drive clicks but zero revenue, and no one connects the dots until the monthly review. Social spend spikes 40% because an auto-bidding rule misfired, and the alert comes from the finance team, not the marketing platform.
This guide breaks down how to monitor marketing campaigns the way high-performing teams do: automated data collection, real-time dashboards that show truth, and alerts that catch issues before they compound. You'll learn the exact steps to build a monitoring workflow that scales across channels, maintains data quality, and keeps every stakeholder aligned on what matters.
Key Takeaways
✓ Campaign monitoring requires connecting all data sources into a single system where metrics are standardized and comparable across channels.
✓ Real-time tracking prevents budget waste by surfacing performance drops, pacing issues, and anomalies while you still have time to adjust.
✓ Automated data validation catches errors at ingestion — before bad data reaches your dashboard and triggers wrong decisions.
✓ Effective monitoring pairs quantitative dashboards with qualitative context: what changed, why it matters, and what action to take.
✓ The best monitoring systems adapt to your workflow, not the other way around — alerts go to the right people, dashboards answer specific questions, and reports require zero manual assembly.
What Is Campaign Monitoring and Why It Matters
Campaign monitoring is the continuous process of collecting, validating, and analyzing performance data from every marketing channel your team runs. It answers three core questions: Are we spending what we planned? Are we getting the results we expected? Where should we intervene right now?
The practice evolved from weekly spreadsheet reviews to real-time dashboards because the cost of waiting increased. In 2026, total U.S. ad spend is projected to grow 9.5% year-over-year, with social media ad spend up 14.6% and connected TV up 13.8%. Budgets are larger, channel counts are higher, and the window for corrective action is shorter. A campaign that burns $10K over the weekend before anyone checks Monday morning is no longer an edge case — it's the default failure mode when monitoring is manual.
For marketing analysts, monitoring is the infrastructure that makes every other part of the job possible. You can't optimize what you can't see. You can't report accurately on data you don't trust. And you can't scale a team's output if every performance question requires three hours of data assembly before you even start the analysis.
Step 1: Define What You Need to Monitor
Before connecting data sources or building dashboards, map out exactly what you need to track and why. Start with business outcomes, then work backward to the metrics and dimensions that predict or explain those outcomes.
Identify Primary KPIs by Channel
Each channel has its own ecosystem of metrics. Google Ads reports clicks, impressions, conversions, and Quality Score. Meta tracks reach, frequency, CPM, and relevance score. LinkedIn measures engagement rate and lead form completions. Your monitoring system needs to track the metrics that matter for each channel's role in your funnel.
For most B2B teams, primary KPIs include:
✓ Awareness channels (display, social, video): Impressions, reach, CPM, view-through rate
✓ Consideration channels (paid search, content syndication): Clicks, CTR, CPC, time on site, pages per session
✓ Conversion channels (retargeting, email, direct): Conversion rate, CPA, ROAS, pipeline contribution
But KPIs alone don't tell the full story. You also need dimensions — the attributes that let you slice performance and find patterns.
Map Critical Dimensions
Dimensions are the 'group by' fields that turn aggregate numbers into insights. Campaign name, ad group, audience segment, device type, geographic region, and time period are the most common. If you can't break down performance by these dimensions, you can't diagnose what's working or allocate budget intelligently.
The challenge: every platform names and structures dimensions differently. Google Ads uses 'Campaign,' Meta uses 'Campaign Name,' LinkedIn uses 'Campaign,' but none of them are guaranteed to match your internal naming taxonomy. Monitoring systems must normalize these into a single schema, or you'll spend every analysis session reconciling field names instead of analyzing data.
Set Monitoring Frequency
Not every metric needs real-time tracking. Decide monitoring cadence based on spend velocity and decision speed:
• Real-time (refresh every 15–60 minutes): High-spend campaigns, launch periods, promotional events, any channel where budget can drain fast
• Daily (refresh every morning): Standard always-on campaigns, brand awareness channels, content performance
• Weekly or monthly: Attribution models, customer lifetime value, long-cycle pipeline metrics
If your monitoring system refreshes too slowly, problems compound before you see them. If it refreshes too often, you'll trigger false alarms on normal variance and train your team to ignore alerts.
Step 2: Connect All Data Sources
You can't monitor what you can't access. The first technical step is establishing automated connections to every platform where campaign data lives: ad networks, analytics tools, CRMs, email platforms, and any custom systems your team built.
Use Native API Connections
Most marketing platforms provide APIs that allow programmatic data extraction. Google Ads has the Google Ads API, Meta offers the Marketing API, LinkedIn has its own API for campaign reporting. These APIs return structured data you can store in a database or warehouse.
The problem: APIs break. Platforms deprecate endpoints, change authentication methods, rename fields, or introduce rate limits that weren't there last month. If you're building and maintaining these connections in-house, plan for 20–30% of your engineering team's time to go toward API maintenance, not new features.
Standardize Data Schemas
Once data flows from each source, it needs to land in a consistent structure. Google Ads might send 'clicks,' Meta sends 'link_clicks,' and LinkedIn sends 'clicks,' but they don't always mean the same thing. Your monitoring system must map these into a unified schema where 'clicks' means one thing across all channels.
This is where most homegrown systems fail. Teams build one-off scripts for each platform, and six months later no one remembers whether the LinkedIn connector counts sponsored InMail clicks the same way the Google connector counts search ad clicks. Data looks integrated, but it's not comparable.
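To make the mismatch concrete, here's a minimal sketch of a normalization layer. The field names and the micros-to-currency conversion are illustrative, not exact API contracts — check each platform's current reporting documentation before relying on them:

```python
# Sketch: map platform-specific metric fields into one unified schema.
# Field names below are illustrative; real API responses vary by version.
FIELD_MAP = {
    "google_ads": {"clicks": "clicks", "cost_micros": "spend", "conversions": "conversions"},
    "meta":       {"link_clicks": "clicks", "spend": "spend", "actions": "conversions"},
    "linkedin":   {"clicks": "clicks", "costInLocalCurrency": "spend"},
}

def normalize_row(platform, raw):
    """Rename platform fields to the unified schema, dropping unmapped ones."""
    mapping = FIELD_MAP[platform]
    row = {unified: raw[src] for src, unified in mapping.items() if src in raw}
    # Google Ads reports cost in micros (millionths of the account currency),
    # so renaming alone isn't enough — units must be normalized too.
    if platform == "google_ads" and "spend" in row:
        row["spend"] = row["spend"] / 1_000_000
    return row
```

The unit conversion is the part homegrown scripts most often miss: two columns can share a name and still not be comparable.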
Handle Historical Data
Monitoring isn't just about today's performance — you need historical context to know if today is normal or an anomaly. When you first connect a data source, pull at least 12 months of history so you can compare current performance to prior periods, identify seasonality, and establish baseline expectations.
Be aware: some platforms limit how far back you can pull data via API. Google Ads typically allows 2+ years, but smaller platforms may cap historical pulls at 90 days. If you wait to set up monitoring until after a campaign launches, you may lose the ability to compare to pre-launch baselines.
Step 3: Automate Data Validation
Data that flows into your monitoring system isn't automatically trustworthy. API responses can be incomplete, platforms sometimes return zeros instead of nulls, currency conversions fail, and UTM parameters get malformed. If bad data reaches your dashboard, every decision made from that dashboard is suspect.
Build Ingestion-Time Checks
Validate data as it arrives, not after it's already in your reporting layer. Common checks include:
• Completeness: Did the API return all expected fields? Are there gaps in the date range?
• Type validation: Are numeric fields actually numbers? Are date fields formatted correctly?
• Range validation: Is CPC within expected bounds? Are impression counts reasonable given spend?
• Referential integrity: Does every campaign ID in the spend table exist in the campaign metadata table?
If validation fails, the pipeline should halt and alert a human before writing bad data to the warehouse. Fixing corrupt data after it's mixed with clean data is exponentially harder than catching it at ingestion.
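A minimal sketch of these ingestion-time checks might look like the following; the required fields and range rules are placeholders you'd replace with your own schema:

```python
from datetime import date, timedelta

class ValidationError(Exception):
    """Raised to halt the pipeline before bad data reaches the warehouse."""

# Illustrative schema — substitute your own required fields.
REQUIRED_FIELDS = {"date", "campaign_id", "spend", "clicks", "impressions"}

def validate_batch(rows, start, end):
    seen_dates = set()
    for row in rows:
        # Completeness: every expected field present
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValidationError(f"missing fields: {missing}")
        # Type validation: numeric fields are actually numbers
        for field in ("spend", "clicks", "impressions"):
            if not isinstance(row[field], (int, float)):
                raise ValidationError(f"{field} is not numeric: {row[field]!r}")
        # Range validation: spend with zero impressions is implausible
        if row["spend"] > 0 and row["impressions"] == 0:
            raise ValidationError(f"spend with zero impressions on {row['date']}")
        seen_dates.add(row["date"])
    # Completeness: no gaps in the requested date range
    expected = {start + timedelta(days=i) for i in range((end - start).days + 1)}
    if expected - seen_dates:
        raise ValidationError(f"missing dates: {sorted(expected - seen_dates)}")
```

Because the function raises instead of logging, a scheduler wrapping it will stop the run and page a human rather than silently writing partial data.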
Monitor for Schema Changes
Platforms add, rename, and deprecate fields constantly. If your ingestion pipeline expects a field called 'campaign_id' and the platform renames it to 'campaignId,' your pipeline breaks silently — data stops flowing, but you might not notice until someone asks why yesterday's numbers are missing.
Automated schema monitoring compares incoming API responses to the expected structure and flags discrepancies before they cause downstream failures. When handled correctly, these changes trigger a controlled update process instead of a data outage.
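At its simplest, schema monitoring is a set comparison between the fields you expect and the fields that arrived. This sketch assumes a flat response row and an illustrative expected schema:

```python
# Illustrative expected schema for one source's API response rows.
EXPECTED_SCHEMA = {"campaign_id", "date", "clicks", "spend"}

def schema_drift(api_row):
    """Compare an incoming API row to the expected structure.
    A renamed field shows up as one 'missing' plus one 'unexpected' entry."""
    incoming = set(api_row)
    return {
        "missing": sorted(EXPECTED_SCHEMA - incoming),
        "unexpected": sorted(incoming - EXPECTED_SCHEMA),
    }
```

If `schema_drift` returns anything non-empty, the pipeline flags it for review instead of continuing — turning a silent outage into a controlled update.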
Preserve Historical Accuracy
Some platforms retroactively adjust historical data when attribution windows close or conversions get updated. If you overwrite historical records every time you pull fresh data, your month-over-month comparisons become meaningless because last month's numbers keep changing.
Best practice: store raw data immutably with a timestamp of when it was pulled, then build reporting views that use the most recent version. This way you can always reconstruct what the dashboard showed on any given day, which is critical when reconciling reports with finance or explaining why last week's projections didn't match this week's actuals.
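The append-only pattern can be sketched with an in-memory store; in production these would be warehouse tables, but the idea — never overwrite, always filter by pull time — is the same:

```python
from datetime import datetime

# Append-only store: every pull is a new row, nothing is ever overwritten.
raw_store = []

def ingest(metric_date, campaign, conversions, pulled_at):
    raw_store.append({"date": metric_date, "campaign": campaign,
                      "conversions": conversions, "pulled_at": pulled_at})

def as_of(metric_date, campaign, cutoff):
    """Reconstruct what the dashboard showed at `cutoff`:
    the latest pull at or before that moment."""
    versions = [r for r in raw_store
                if r["date"] == metric_date and r["campaign"] == campaign
                and r["pulled_at"] <= cutoff]
    if not versions:
        return None
    return max(versions, key=lambda r: r["pulled_at"])["conversions"]
```

With this structure, "why did last week's number change?" has a checkable answer: query `as_of` with last week's cutoff and compare.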
Step 4: Build Performance Dashboards
Dashboards are the interface between raw data and human decision-making. A good dashboard answers specific questions instantly. A bad dashboard is a pile of charts that forces viewers to do their own analysis every time they log in.
Design for Questions, Not Metrics
Before adding a chart, write down the question it answers. 'What's our overall ROAS this month?' is a question. 'Here's a line chart of daily spend' is not — it's a metric waiting for someone to interpret it.
Effective monitoring dashboards are organized by question hierarchy:
• Top section: Are we on track overall? (Total spend vs. budget, total conversions vs. goal, current pacing toward month-end targets)
• Middle section: Which channels are performing? (ROAS by channel, CPA by campaign, contribution to pipeline by source)
• Bottom section: Where are the anomalies? (Biggest movers vs. last week, campaigns exceeding CPA threshold, budget exhaustion warnings)
This structure lets users triage in seconds: green at the top means keep scrolling, red at the top means stop and investigate.
Use Benchmarks and Context
A number without context is useless. 'CPA is $87' tells you nothing unless you know the goal was $75 or that last month it was $62. Every key metric should appear alongside:
• Goal or target (set during planning)
• Prior period comparison (last week, last month, last year)
• Variance indicator (% change, absolute difference, color-coded status)
Analysts often skip this step because it requires storing historical snapshots and goals in the same database as live metrics. But without it, dashboards become trivia instead of tools.
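Storing goals and priors next to live metrics lets the dashboard compute context automatically. A sketch, with illustrative status bands (within 5% of target is green, within 15% amber, beyond that red):

```python
def metric_context(actual, target, prior):
    """Pair a metric with its target, prior period, and a color-coded status.
    The 5%/15% bands are illustrative — tune them per metric."""
    vs_target = (actual - target) / target * 100
    vs_prior = (actual - prior) / prior * 100
    # Deviation in either direction counts: underspending a plan
    # can be as actionable as overshooting a CPA target.
    if abs(vs_target) <= 5:
        status = "green"
    elif abs(vs_target) <= 15:
        status = "amber"
    else:
        status = "red"
    return {"actual": actual, "vs_target_pct": round(vs_target, 1),
            "vs_prior_pct": round(vs_prior, 1), "status": status}
```

Applied to the example above: a CPA of $87 against a $75 goal and a $62 prior month comes back red, with both variance figures attached.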
Enable Drill-Down
High-level dashboards show you that there's a problem. Drill-down views show you where. If overall ROAS dropped 15%, you need to be able to click through to see which campaigns, ad groups, or audience segments drove the decline.
This requires dimensional data at every level: campaign, ad group, creative, audience, device, geography. If your data model doesn't support drill-down, every investigation turns into a custom SQL query instead of a two-click exploration.
Step 5: Set Up Automated Alerts
Dashboards require someone to look at them. Alerts push information to the people who need it, exactly when they need it. The goal is to catch issues faster than manual checks allow, without flooding inboxes with false positives.
Define Alert Triggers
Alerts should fire on conditions that require action, not just any deviation from normal. Common triggers include:
• Budget pacing: Campaign will exhaust budget before month-end, or spend is tracking 20%+ below plan
• Performance thresholds: CPA exceeds target by 30%, ROAS drops below minimum acceptable level
• Anomalies: Spend spikes 50%+ vs. 7-day average, conversion rate drops to zero, CTR doubles overnight
• Data quality: API connection fails, no data received for 6+ hours, schema validation errors
Each alert type needs its own threshold and escalation path. A 10% CPA increase might be noise; a 50% increase is a signal. Tune thresholds based on historical volatility — channels with naturally high variance need wider bands to avoid alert fatigue.
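Budget pacing, for example, reduces to projecting month-end spend from the current run rate. A hedged sketch, using the 20%-below-plan band from above as a default:

```python
def pacing_alert(spend_to_date, budget, day, days_in_month, band=0.20):
    """Project month-end spend from the current run rate.
    Returns an alert string, or None when the campaign is on pace."""
    projected = spend_to_date / day * days_in_month
    if projected > budget:
        return f"over: projected ${projected:,.0f} vs ${budget:,.0f} budget"
    if projected < budget * (1 - band):
        return f"under: tracking {(1 - projected / budget) * 100:.0f}% below plan"
    return None  # on pace — no alert, no noise
```

A linear run-rate projection is deliberately simple; campaigns with strong day-of-week patterns would need a weighted baseline instead.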
Route Alerts to the Right People
Not every alert needs to go to everyone. Budget exhaustion warnings should go to campaign managers. Data pipeline failures should go to your analytics engineer. Executive stakeholders only need alerts when key business metrics miss targets, not when individual ad groups underperform.
Alert routing requires role-based logic: if X condition applies to Y campaign owned by Z person, send alert to Z. Most BI tools don't support this natively, which is why teams end up with one-size-fits-all alerts that everyone ignores.
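The role-based logic itself is simple to express; the hard part in practice is keeping the ownership data current. A sketch with hypothetical addresses:

```python
# Illustrative routing table: alert type -> function that resolves recipients.
ROUTES = {
    "budget_pacing": lambda alert: [alert["campaign_owner"]],
    "pipeline_failure": lambda alert: ["analytics-engineer@example.com"],
    "kpi_miss": lambda alert: ["marketing-leadership@example.com"],
}

def route(alert):
    """Send each alert only to the people who can act on it,
    with a team-wide fallback for unrecognized alert types."""
    resolver = ROUTES.get(alert["type"])
    return resolver(alert) if resolver else ["analytics-team@example.com"]
```

The key design choice is that recipients are resolved from the alert's own metadata (like `campaign_owner`), not hardcoded per dashboard — so ownership changes propagate automatically.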
Include Context in Alerts
An alert that says 'Campaign X exceeded CPA threshold' forces the recipient to log in, find the campaign, check the numbers, and figure out what changed. An alert that says 'Campaign X CPA is $142 (target: $90) — spend up 60% vs. last week, conversion rate unchanged — likely cause: bid adjustment on 4/12' gives the recipient everything they need to decide on next steps without opening a dashboard.
This level of context requires the alerting system to have access to historical data, goal definitions, and event logs (so it knows a bid adjustment happened on 4/12). It's more work to set up, but it's the difference between alerts that drive action and alerts that drive frustration.
Step 6: Implement Cross-Channel Attribution
Single-channel metrics tell you how each platform reports its own performance. Attribution tells you how channels work together to drive outcomes. Without it, you'll over-invest in last-click channels and starve the awareness and consideration tactics that make those last clicks possible.
Choose an Attribution Model
Attribution models assign credit for conversions across the customer journey. Common models include:
• Last-click: 100% credit to the final touchpoint before conversion
• First-click: 100% credit to the first touchpoint
• Linear: Equal credit to all touchpoints
• Time-decay: More credit to recent touchpoints
• Data-driven: Credit based on statistical analysis of actual conversion paths
No model is perfect. Last-click undervalues awareness. First-click ignores everything that happened after initial discovery. Linear treats a single display impression the same as a high-intent search click. Data-driven models require large conversion volumes to be statistically valid.
Most teams use multiple models in parallel: last-click for platform-level optimization, time-decay or data-driven for budget allocation decisions, and first-click to ensure awareness channels get credit.
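The simpler models are easy to express directly. This sketch implements linear and time-decay credit over an ordered touchpoint path; the 0.5 decay factor is an illustrative default, not a standard:

```python
def linear_credit(touchpoints):
    """Equal credit to every touchpoint in the conversion path."""
    share = 1 / len(touchpoints)
    credit = {}
    for channel in touchpoints:
        credit[channel] = credit.get(channel, 0) + share
    return credit

def time_decay_credit(touchpoints, decay=0.5):
    """More credit to recent touchpoints: each earlier touch gets
    `decay` times the weight of the one after it, normalized to sum to 1."""
    n = len(touchpoints)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(weights)
    credit = {}
    for channel, w in zip(touchpoints, weights):
        credit[channel] = credit.get(channel, 0) + w / total
    return credit
```

Running both models over the same paths and comparing the channel totals is a quick way to see how much your budget-allocation conclusions depend on model choice.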
Track Touchpoints Across Channels
Attribution only works if you can connect touchpoints to individual users. This requires:
• Consistent UTM parameters: Every campaign URL must include source, medium, campaign, and ideally content/term parameters
• User identification: Cookie-based tracking, logged-in user IDs, or probabilistic matching when cookies aren't available
• Conversion event tracking: Standardized tracking of form fills, purchases, demo requests, and other conversion actions
If UTM parameters are inconsistent — sometimes 'google' and sometimes 'Google,' sometimes 'cpc' and sometimes 'paid-search' — your attribution model will split credit across what it thinks are different channels, even though they're the same.
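A small canonicalization step at ingestion prevents this splitting. The alias tables here are examples — yours should come from your UTM governance documentation:

```python
# Illustrative alias tables mapping observed variants to canonical values.
SOURCE_ALIASES = {"google": "google", "googleads": "google",
                  "fb": "meta", "facebook": "meta"}
MEDIUM_ALIASES = {"cpc": "paid_search", "ppc": "paid_search",
                  "paid-search": "paid_search"}

def normalize_utm(source, medium):
    """Collapse casing, whitespace, and alias variants so one
    channel stays one channel in the attribution model."""
    s = source.strip().lower()
    m = medium.strip().lower()
    return SOURCE_ALIASES.get(s, s), MEDIUM_ALIASES.get(m, m)
```

Unknown values pass through unchanged, so new channels still appear in reporting — they just haven't been aliased yet.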
Reconcile Attribution with Platform Reporting
Your attribution model will never match platform-reported conversions perfectly. Google Ads uses a 30-day click window and attributes conversions to the ad click. Your analytics tool might use a 7-day window and attribute to the landing page session. Both are 'correct' for their own purposes, but you need to explain the difference when stakeholders ask why the numbers don't match.
Best practice: report both views side by side. Show platform-reported conversions for optimization (because that's what the bidding algorithm optimizes toward) and attributed conversions for budget allocation (because that reflects cross-channel reality). Document the methodology so future analysts don't have to reverse-engineer it.
Step 7: Create Monitoring Workflows
Dashboards and alerts are infrastructure. Workflows are how your team actually uses that infrastructure to make decisions and take action. Without defined workflows, monitoring becomes surveillance — you watch performance, but you don't intervene systematically.
Daily Check-In Routine
High-performing teams start each day with a structured monitoring check:
• Review overnight alerts — were there any anomalies, budget issues, or data quality problems?
• Check pacing dashboard — are active campaigns on track to hit monthly goals?
• Scan performance scorecards — which campaigns or channels moved significantly vs. yesterday?
• Triage issues — tag items for immediate action, deeper analysis, or continued monitoring
This takes 10–15 minutes and prevents small problems from becoming expensive ones. The key is consistency: the check happens every day, even when you don't expect problems, so you catch the unexpected.
Weekly Performance Review
Once a week, step back from daily firefighting and assess the bigger picture:
• Are we on track to hit monthly and quarterly goals?
• Which channels or campaigns should we scale, pause, or adjust?
• What experiments or optimizations should we run this week?
• Are there any patterns in the data that suggest strategic shifts?
This is where monitoring feeds into planning. If paid search consistently outperforms social on cost per opportunity, the weekly review is when you decide to shift budget. If display awareness campaigns correlate with search conversion lift, the weekly review is when you design a test to prove causation.
Incident Response Playbook
When alerts fire or dashboards show critical issues, your team needs a shared playbook for response:
• Acknowledge: Who owns the issue? Set an expected resolution time.
• Diagnose: Is this a data issue, a platform issue, or a true performance problem?
• Contain: Pause spending, revert changes, or take other immediate action to stop the bleeding.
• Resolve: Fix the root cause — adjust bids, update creative, repair data pipelines.
• Document: Record what happened, why, and how it was fixed, so the next person doesn't start from zero.
Without this structure, incidents get handled ad hoc — different people respond differently, resolution takes longer, and the same issues recur because no one documented the fix.
Common Mistakes to Avoid
Even well-intentioned monitoring systems fail when teams make these predictable errors:
Monitoring Too Many Metrics
It's tempting to track every available field from every platform. Impressions, clicks, CTR, CPC, conversions, conversion rate, ROAS, engagement rate, video completion rate, bounce rate, time on site — the list goes on. But cognitive load is real. If your dashboard shows 40 metrics, no one knows which five actually matter.
Focus on the vital few: the 3–5 metrics that directly predict business outcomes. Everything else is context or diagnostic detail, not primary monitoring.
Ignoring Data Latency
Not all platforms report data at the same speed. Google Ads updates throughout the day. Facebook can lag by several hours. Salesforce opportunity data might sync once per day. If you build a dashboard that mixes real-time and delayed data without labeling latency, users will make decisions based on incomplete information.
Always display the last refresh timestamp for each data source. If a dashboard shows 'as of 2 hours ago,' users know they're not seeing the absolute latest.
Setting Static Thresholds
Alerting on 'CPA exceeds $100' works until seasonality kicks in. In December, your CPA might naturally run higher due to competition. A static threshold fires false alerts all month. A dynamic threshold based on historical patterns for the same time period fires only when performance is abnormal for the context.
Use rolling averages, seasonal baselines, and percentile-based thresholds instead of fixed numbers whenever possible.
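One common dynamic approach is a mean-plus-k-standard-deviations threshold computed over same-period history (prior Decembers, or a trailing window). A sketch using Python's statistics module:

```python
import statistics

def dynamic_threshold(history, k=2.0):
    """Alert threshold = mean + k standard deviations of same-period
    history, instead of a fixed number that ignores seasonality."""
    return statistics.mean(history) + k * statistics.stdev(history)

def is_abnormal(value, history):
    return value > dynamic_threshold(history)
```

With December CPA history of $95–$105, a $104 reading stays quiet while a $120 reading fires — exactly the behavior a static $100 threshold can't give you. The multiplier `k` is the tuning knob: wider for naturally volatile channels, tighter for stable ones.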
Building Dashboards for Everyone
A dashboard designed for executives, campaign managers, and analysts at the same time serves none of them well. Executives need high-level KPIs and trends. Campaign managers need drill-down access to ad group and creative performance. Analysts need raw data access and flexible filtering.
Build role-specific views. Let each audience see what they need without wading through what they don't.
Neglecting Documentation
Six months from now, you won't remember why you chose a 7-day attribution window instead of 30-day, or why the 'Brand' campaign filter excludes certain keywords, or what 'Adjusted ROAS' means versus regular ROAS. Future team members definitely won't know.
Maintain a data dictionary that defines every metric, every filter, and every calculation. Store it alongside the dashboards, not in a forgotten Wiki page.
Tools That Help Monitor Marketing Campaigns
Building a monitoring system requires integration, storage, transformation, visualization, and alerting layers. You can assemble this from separate tools or use a platform that combines them.
End-to-End Marketing Analytics Platforms
Improvado provides the full stack for campaign monitoring: 500+ pre-built connectors to ad platforms and marketing tools, automated data normalization into a standardized schema, built-in data validation with 250+ governance rules, and compatibility with any BI tool or data warehouse. The platform handles API maintenance, schema changes, and historical data preservation automatically, so analysts spend time analyzing instead of maintaining pipelines.
Improvado includes features purpose-built for monitoring: pre-launch budget validation, real-time spend pacing alerts, and automatic anomaly detection across all connected channels. It's ideal for teams running campaigns across 10+ channels or managing $500K+ in monthly ad spend. Not ideal for small teams with simple monitoring needs or single-channel marketers.
Supermetrics focuses on data extraction and loading into Google Sheets, Excel, or BI tools. It supports 100+ connectors and works well for teams already comfortable building their own transformation and validation logic. Pricing starts low, but costs scale quickly with data volume and connector count.
Funnel.io offers data integration and marketing-specific transformation. It's strong on multi-currency normalization and cost aggregation. Less strong on real-time monitoring and alerting compared to platforms purpose-built for operations.
Business Intelligence Tools
Once data is centralized, you need a visualization layer. Tableau, Looker, and Power BI are the most common. Each has strengths: Tableau for complex visualizations, Looker for SQL-based modeling, Power BI for Microsoft ecosystem integration. All require clean, well-structured data as input — they visualize what you give them but won't fix data quality issues upstream.
Monitoring and Alerting Tools
Monte Carlo and Datafold provide data observability — they monitor pipelines for schema changes, data quality issues, and anomalies. They're overkill for small teams but essential once you're managing dozens of data sources and multiple downstream consumers of that data.
PagerDuty and Opsgenie handle alert routing and escalation. If your monitoring system detects a critical issue at 2 AM, these tools ensure the right person gets paged and that alerts don't get lost in email.
How to Scale Monitoring as Campaigns Grow
What works for 5 campaigns and 3 channels breaks when you scale to 50 campaigns across 10 channels. Monitoring systems must scale in three dimensions: data volume, organizational complexity, and decision speed.
Automate Everything Repeatable
Any task you do more than once a week should be automated. Pulling data, standardizing field names, calculating derived metrics, generating reports, checking for anomalies — these should run on schedules without human intervention. If your monitoring workflow requires manual steps, you've capped your scale at the number of hours your team can spend on those steps.
Centralize Configuration Management
As campaigns multiply, you need a single source of truth for goals, budgets, alert thresholds, and attribution rules. Store this in a database or configuration management system, not in individual dashboard filters or hardcoded scripts. When a campaign budget changes, you should update it once and have that change propagate everywhere it's referenced.
Build Self-Service Analytics
At small scale, analysts can field every ad hoc request. At large scale, you need to enable stakeholders to answer their own questions. This requires well-documented dashboards, intuitive filtering, and governed data models where users can't accidentally create nonsense metrics.
Self-service doesn't mean 'give everyone database access.' It means building tools where non-technical users can explore data within guardrails that prevent misuse.
Conclusion
Monitoring marketing campaigns isn't about watching numbers go up and down. It's about building a system that catches problems early, surfaces opportunities fast, and keeps every stakeholder aligned on truth. The teams that do this well automate data collection, standardize metrics across channels, validate quality at ingestion, and design dashboards that answer specific questions instead of displaying generic charts.
Start with the fundamentals: connect all your data sources, establish a single schema, and build one dashboard that answers your most important question. Then expand from there — add attribution, refine alerts, create role-specific views. The goal isn't perfection on day one. The goal is a system that improves every week and scales as your campaigns grow.
When monitoring works, you stop reacting to last week's problems and start preventing next week's. You reallocate budget based on evidence, not intuition. You catch the bid adjustment that would have burned $10K before it burns $500. And you spend less time explaining discrepancies and more time finding the next opportunity.
Frequently Asked Questions
How often should I check campaign performance?
Check high-spend campaigns daily, standard campaigns weekly. But rely on automated alerts for real-time issues — you shouldn't need to manually check dashboards to catch budget overruns or performance drops. The right cadence depends on spend velocity: if a campaign can burn its monthly budget in three days, you need real-time monitoring. If monthly spend is stable and predictable, weekly checks are sufficient. Most teams land on a daily morning routine for active campaigns and weekly deep-dives for strategic planning.
What metrics matter most for campaign monitoring?
Focus on metrics tied directly to business outcomes: cost per acquisition, return on ad spend, and contribution to pipeline or revenue. These connect marketing activity to results executives care about. Secondary metrics like CTR, conversion rate, and engagement rate help diagnose why primary metrics move, but they shouldn't be your primary monitoring focus. If you can only track three things, track spend vs. budget, cost per goal completion, and attribution to revenue.
How do I handle attribution when platforms report different conversion numbers?
Expect discrepancies — platforms use different attribution windows, tracking methods, and definitions. Google Ads might report 100 conversions with a 30-day click window while your analytics tool reports 75 with a 7-day any-click window. Both are correct for their purposes. Use platform-reported numbers for in-platform optimization (because that's what bidding algorithms see) and use a consistent cross-platform attribution model for budget allocation decisions. Document your methodology and educate stakeholders on why the numbers differ. The goal isn't perfect alignment — it's consistent decision-making.
Should I build campaign monitoring infrastructure in-house or use a platform?
Build in-house if you have dedicated engineering resources, simple monitoring needs (fewer than five data sources), and very specific requirements no platform addresses. Use a platform if you're managing 10+ data sources, need monitoring operational quickly, or don't want to spend engineering time on API maintenance. The hidden cost of in-house builds is ongoing maintenance — platforms deprecate APIs, change authentication, and add rate limits constantly. Most teams underestimate this by 3–5x when making the build-versus-buy decision.
What causes most data quality issues in campaign monitoring?
Inconsistent UTM parameters, platform API changes, and timezone mismatches cause the majority of problems. Someone tags a campaign 'google_cpc' instead of 'google-cpc' and your reporting splits what should be one channel into two. Google updates their API and renames a field, breaking your ingestion script. One platform reports in UTC, another in Pacific time, and your dashboard shows yesterday's spend aggregated with today's. Prevent these with strict UTM governance, automated schema monitoring, and timezone normalization at ingestion. The best monitoring systems catch these issues before they reach dashboards.
How do I monitor campaigns running across different geographic regions and time zones?
Standardize all timestamps to one timezone (usually UTC) at data ingestion, then convert to local time zones only in the presentation layer. This prevents double-counting and misalignment when aggregating data. For reporting, create region-specific dashboard views that show performance in local business hours — a 'daily performance' dashboard for EMEA teams should show EMEA business days, not Pacific time. Alert thresholds may need regional adjustment too, since normal variance differs by market maturity and competitive intensity.
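The ingestion-side half of this can be sketched with Python's zoneinfo: attach the source timezone, convert to UTC for storage, and convert back only for display:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(local_ts, source_tz):
    """Normalize a platform's naive local timestamp to UTC at ingestion."""
    naive = datetime.fromisoformat(local_ts)
    return naive.replace(tzinfo=ZoneInfo(source_tz)).astimezone(ZoneInfo("UTC"))

def to_local(utc_ts, display_tz):
    """Convert back to a regional timezone only in the presentation layer."""
    return utc_ts.astimezone(ZoneInfo(display_tz))
```

A late-evening Pacific timestamp lands on the next UTC day — which is precisely the misalignment that double-counts "yesterday's" spend when sources aren't normalized.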
What should I do when a monitoring alert fires?
First, verify it's not a data issue — check that the platform is reporting correctly and your pipeline ingested data properly. If the alert is real, assess severity: is this costing money right now, or is it a trend to watch? For critical issues (budget exhausted early, CPA spiked 100%+, conversions dropped to zero), pause spending and diagnose the root cause before resuming. For trend alerts (performance declining but not yet critical), investigate and set a follow-up check. Always document what you found and what action you took — this builds institutional knowledge and trains your alerting system over time.