TikTok Ads Data Is a Moving Target: 5 Extraction Challenges That Won't Stay Fixed


Based on Improvado customer data: Built from analysis of 200+ enterprise brands running Improvado's agentic data pipelines across TikTok ad accounts.

TL;DR for enterprise analytics teams: TikTok ships Marketing API changes every few weeks — field names mutate, endpoints deprecate on 60-day clocks, and rate limits shift without a changelog entry. Your pipeline ran green on Monday, silently returned empty payloads on Tuesday, and now your CMO is asking why TikTok spend disappeared from the weekly dashboard. Add an 18-month lookback ceiling, province IDs you have to reverse-engineer, and a 7-day click window that makes TikTok look half as efficient as Meta — and you're spending more time reconciling TikTok than buying on it.

For agencies: multiply every one of those problems by the number of clients in your Business Center. One API breaking change becomes 20 pipeline fixes. One attribution-window debate becomes 20 client QBRs. One creative-naming inconsistency becomes a taxonomy nightmare across every roll-up report. The difference between an agency that scales past 15 TikTok clients and one that plateaus at 6 is almost entirely in the data layer.

What you'll get from this guide

  • The ten TikTok data problems senior analysts at enterprise brands and performance agencies hit every quarter — with the exact failure mode, not the marketing-page version
  • What actually works — the fix pattern teams use when they can't wait for TikTok to stabilize
  • A unified playbook for running TikTok alongside Meta and Google without the apples-to-oranges comparison trap
  • An agency-specific section on Business Center fragmentation, cross-client isolation, and portfolio roll-ups
Review your TikTok extraction setup
30-minute working session with an Improvado solutions architect. Walk through your current TikTok pipeline — API version, rate-limit posture, match-rate baseline, Business Center scopes — and see what a managed agentic data pipeline would replace. No slides, no prep: we review your actual schema live.

1. TikTok's Marketing API Changes Faster Than You Can Patch

The Problem: TikTok's Marketing API evolves at a pace that makes Google Ads look like a stable RFC. New endpoints ship, old ones are deprecated on 60-day timelines, rate limits change mid-quarter, and field definitions shift between versions — sometimes before the changelog is updated. If you've built a custom integration, you've already had it break, probably more than once.

Why it happens: TikTok runs an aggressive versioning cadence (v1.2 → v1.3 in roughly nine months, with overlapping deprecation windows), enum values for fields like objective_type and placement change between minor versions, and sandbox behavior doesn't always mirror production — so integrations pass staging and fail the next morning.

For agencies: When you run TikTok for 10–30 client brands, a single API breaking change multiplies into 10–30 pipeline fixes. Agencies running raw API integrations burn billable hours on plumbing instead of strategy — and the client who noticed the gap first tends to be the one asking uncomfortable questions about data governance.

COMMON ISSUE

When TikTok's Marketing API cannot resolve an organic data request, it returns an empty payload with HTTP 200 — no error code, no retry signal. The extraction pipeline sees a successful response and moves on, leaving a silent data gap.

Improvado AI Agent analyzing TikTok Ads data pipeline health
Improvado's agentic data pipeline automatically detects TikTok API changes and silent payload failures before they hit downstream reports.

Beyond the technical churn, TikTok carries a regulatory-risk overlay that no other major ad platform shares: investing engineering cycles in a custom pipeline for a platform that could face restrictions at any time makes the ROI on in-house plumbing genuinely hard to justify.

Concrete failure modes we see most often:

  • Aggressive versioning — TikTok releases new Marketing API versions multiple times per year and deprecates old versions on 60-day timelines.
  • Undocumented field changes — enum values, nullability, and nested field structure shift between minor versions without appearing in the changelog until a contributor files a GitHub issue.
  • Sandbox vs production drift — integrations that pass sandbox QA fail in production because permission scopes, account-type enums, or attribution defaults differ.
  • Silent extraction failures — a connector shows "connected" in the UI while the underlying reports endpoint returns data: [] with HTTP 200 for days.
  • New-connection propagation delay — TikTok Ads Manager can take up to 2 hours to surface a newly connected data source, creating "why is my integration not working" tickets that resolve themselves.

The Fix: A managed connector layer with automated changelog monitoring and payload-shape validation catches empty extracts before they reach your warehouse. Improvado's agentic data pipelines maintain 1000+ pre-built connectors with a dedicated API-monitoring team — when TikTok pushes an update, the connector is updated before your next refresh, with zero code changes on your end. Teams running on a managed layer typically eliminate 10–15 hours per month of emergency pipeline firefighting.
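The payload-shape check itself is simple to sketch. Here's a minimal validation pattern in Python — the response envelope (`code`, `data.list`) follows the shape TikTok's reports endpoint commonly returns, but treat the exact field names as an assumption to verify against your API version:

```python
def validate_extract(response_json, min_rows=1):
    """Flag TikTok responses that are HTTP-successful but semantically empty.

    The reports endpoint can return HTTP 200 with an empty data list —
    no error code, no retry signal. Treating that as success is how
    silent gaps reach the warehouse.
    """
    # Non-zero code is an explicit API error — fail loudly.
    if response_json.get("code", 0) != 0:
        raise RuntimeError(f"TikTok API error code {response_json.get('code')}")

    rows = response_json.get("data", {}).get("list", [])
    if len(rows) < min_rows:
        # HTTP 200 + empty payload: the silent-failure signature.
        raise RuntimeError("Empty payload on HTTP 200 — possible silent failure")
    return rows
```

Run this between extraction and load, so an empty payload halts the pipeline instead of writing a zero-row day into the warehouse.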

2. Rate Limits and Throttling Break Large-Scale Pulls

The Problem: TikTok's Marketing API enforces per-minute sliding-window rate limits that bite hard when you're pulling historical data or running multi-account roll-ups. For reporting endpoints, limits sit around 600 requests per minute with a 100-items-per-page cap — and if you blow past them, you get a 40100 error that requires exponential backoff, not a simple retry.

Why it happens: TikTok rate limits are applied per access token AND per advertiser ID, stacked. Agencies pulling 20 clients from one app-token hit global caps first; brands pulling deep historical windows with daily + hourly granularity hit per-advertiser caps on the reports endpoint specifically.

For agencies: If all your client workspaces share a single app credential, you're throttling yourself. Proper rate-limit architecture means separating app tokens by client tier, implementing per-endpoint backoff, and spreading historical backfills across off-peak windows.

Failure modes:

  • Silent 40100 throttling during large historical backfills — the initial pull succeeds, then rows drop mid-range without a clear error upstream.
  • Pagination collapse on reports with >100K rows — cursor-based pagination times out, requiring filter-based chunking (by date range + advertiser_id).
  • Token-scope throttling — one misbehaving extract in a shared token starves every other extract on the same token.
  • Retry storms — naive retry loops turn a 40100 into a 5-minute outage while the backoff window extends.

The Fix: Use a data-extraction layer that implements adaptive per-endpoint backoff, credential pooling across advertiser IDs, and stateful resumable pagination — not a script with a time.sleep(60). A managed solution handles throttling at the framework level so your analysts never see a 40100 in a dashboard.
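For teams who do need to hand-roll a retry layer, the backoff pattern looks like this — a hedged sketch, assuming the standard TikTok envelope where 40100 is the rate-limit code:

```python
import random


def fetch_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a TikTok API call with exponential backoff and jitter on 40100.

    `call` is any zero-argument function returning the parsed JSON
    envelope. Code 40100 needs a growing wait, not an immediate retry —
    naive retry loops just extend the throttling window.
    """
    import time

    for attempt in range(max_retries):
        resp = call()
        if resp.get("code") != 40100:  # not throttled — return as-is
            return resp
        # Exponential backoff with jitter to avoid synchronized retry storms.
        delay = base_delay * (2 ** attempt) * (1 + random.random())
        time.sleep(delay)
    raise RuntimeError(f"Still rate-limited after {max_retries} retries")
```

Pair this with per-endpoint state (separate backoff clocks for reports vs. creative endpoints) so one throttled extract doesn't stall the others.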

3. Match Rate and Identity Resolution Keep Underperforming

The Problem: TikTok's conversion tracking depends on matching user signals — email, phone, device ID, advanced matching parameters — to ad exposure. When match rates fall (and they do, silently), your reported conversions undercount actual results, your CPA looks inflated, and you start shifting budget away from a channel that's actually performing.

Why it happens: iOS ATT opt-in rates hover around 25–35% industry-wide, browser pixels lose 20–40% of events to cookie restrictions and ad blockers, and TikTok's advanced-matching hash normalization is stricter than Meta's — trailing whitespace, unnormalized phone formats, and case-sensitive email hashes all drop match rate.

COMMON ISSUE

When reconciling TikTok pixel, SKAN, and Marketing API numbers, three different conversion counts appear for the same campaign. TikTok publishes no crosswalk and no documentation explaining the variance across surfaces.

Failure modes:

  • Pixel-only conversion loss — browser pixel alone misses 20–40% of actual conversions; running pixel + Events API with proper dedup typically recovers 15–25% of previously invisible events.
  • Advanced-matching hash failures — emails with mixed case, phone numbers in non-E.164 format, or unnormalized addresses silently drop match rate by 10–30%.
  • Modeled-conversion opacity — TikTok fills iOS gaps with estimated conversions whose methodology is opaque; estimates can differ from server-side ground truth by 20–50%.
  • Dedup misconfiguration — running both pixel and Events API requires proper event_id matching; misconfigured dedup either double-counts or drops events silently.

The Fix: Instrument server-side conversion events via Events API as the primary source of truth, treat pixel data as a secondary signal, and run a nightly reconciliation job comparing TikTok-reported conversions against your CRM or e-commerce system. Teams using this pattern typically recover 15–30% of previously invisible conversions within the first month.
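Because TikTok's hash matching is stricter than Meta's, the normalization step before hashing is where most match-rate points are won or lost. A minimal sketch (a production pipeline should use a dedicated library such as `phonenumbers` for E.164 handling):

```python
import hashlib
import re


def normalize_email(email):
    """Lowercase and strip before hashing — 'User@Example.com ' and
    'user@example.com' must hash identically on your side, or the
    match silently fails."""
    return email.strip().lower()


def normalize_phone(phone, default_country="+1"):
    """Reduce to an E.164-style form: digits only, with a country prefix.
    `default_country` is an assumption for lists without explicit codes."""
    digits = re.sub(r"\D", "", phone)
    if phone.strip().startswith("+"):
        return "+" + digits
    return default_country + digits


def sha256_hex(value):
    """SHA-256 hex digest of the normalized value, ready for upload."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()
```

The win is consistency: every identifier passes through the same normalizer before hashing, on every upload, in every pipeline.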

4. Geographic Data: Province IDs and the Sub-Country Reporting Trap

The Problem: TikTok's reporting API returns geography at the province/region level using numeric IDs, not ISO codes or human-readable names. Mapping TikTok province IDs to standard geographies (US states, Canadian provinces, EU regions) requires a lookup table that TikTok publishes incompletely and updates without notice.

Why it happens: TikTok built its geo taxonomy on an internal ID system designed for the product UI, not for cross-platform analytics. When you pull reports with dimensions=[region_id], you get back opaque numeric IDs like 6252001 that your dashboard has to translate to "United States — Texas" — and the mapping reference is a moving target.

For agencies: Clients expect geo reports that line up with their Meta and Google geo breakdowns. If TikTok's Montana ID silently gets split into two regions in a new API version, last month's client deck no longer reconciles against this month's — and you're the one explaining why.

Failure modes:

  • Incomplete mapping reference — TikTok docs don't publish the complete province-ID-to-name table for every country; integrators reverse-engineer it from API responses.
  • ID renaming between versions — region IDs occasionally change between API versions; historical data keyed on old IDs orphan-joins against new reference tables.
  • Mixed-granularity reports — some endpoints return country-level geo, others return region-level, others return DMA-equivalent; combining them requires custom fan-out logic.
  • No postal-code reporting — unlike Google Ads' geographic reports, TikTok does not expose zip/postal aggregates at all.

The Fix: Maintain a versioned geo-dimension table keyed on TikTok province ID, with effective_from/effective_to columns so historical reports don't break when TikTok changes the mapping. A managed data layer ships this dimension table as part of the connector; an in-house pipeline requires you to curate it manually.
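The as-of join against that versioned dimension is the key mechanic. A Python sketch with illustrative IDs and names (the real table lives in your warehouse; the rows here are hypothetical):

```python
from datetime import date

# Illustrative rows of a versioned geo-dimension table: each TikTok
# province ID maps to a name for a validity window, so historical
# reports keep resolving correctly after TikTok changes the mapping.
GEO_DIM = [
    {"province_id": "1234", "name": "United States - Texas",
     "effective_from": date(2023, 1, 1), "effective_to": date(2024, 6, 30)},
    {"province_id": "1234", "name": "United States - Texas (revised)",
     "effective_from": date(2024, 7, 1), "effective_to": date(9999, 12, 31)},
]


def resolve_geo(province_id, report_date):
    """As-of join: pick the mapping row that was valid on the report date."""
    for row in GEO_DIM:
        if (row["province_id"] == province_id
                and row["effective_from"] <= report_date <= row["effective_to"]):
            return row["name"]
    return None  # unmapped ID — surface for manual curation, don't drop
```

The `None` branch matters: new or renamed IDs should land in a review queue, not disappear from geo reports.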

5. Reporting Delays and the 48-Hour Data-Settlement Window

The Problem: TikTok reporting data is not final for 24–48 hours after the event. Video engagement metrics (average watch time, completion rate by quartile, profile visits) can take 48+ hours to fully settle. If your daily dashboard pulls at 8am local time for yesterday's data, you're looking at a partial number — and weekly roll-ups compound the issue.

Why it happens: TikTok's reporting pipeline deduplicates, fraud-filters, and enriches events asynchronously. SKAN postbacks on iOS arrive in tiered windows (0–2 days, 3–7 days, 8–35 days), so full iOS conversion attribution takes up to 35 days to finalize.

For agencies: Client reports run on fixed weekly cadences. If Monday's report pulls partial Friday-Saturday-Sunday data, every client deck has "data still settling" footnotes — or, worse, you re-pull Monday afternoon and last week's numbers quietly shift.

Failure modes:

  • Partial prior-day data — pulling yesterday's report in the morning returns 60–85% of the final numbers.
  • Late-arriving SKAN postbacks — iOS conversions attributed to campaigns that already ended weeks ago.
  • Retroactive adjustments — TikTok fraud-filters and dedup adjustments can revise yesterday's data downward by 2–8%.
  • Timezone ambiguity — reports default to account-level timezone, but cross-country roll-ups require explicit UTC alignment or you'll double-count the midnight hour.

The Fix: Build a 3-day settlement buffer into dashboards that feed pacing decisions. Run nightly re-pulls for a trailing 7-day window to catch retroactive adjustments. For agency client reports, standardize on Wednesday-for-prior-week cadence rather than Monday-for-prior-week.
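The settlement buffer and trailing re-pull can be expressed as two date ranges computed each night — a small sketch, with the 3-day and 7-day defaults from above:

```python
from datetime import date, timedelta


def repull_window(today, settle_days=3, trailing_days=7):
    """Date boundaries for a nightly TikTok refresh.

    Returns (repull_start, stable_through):
    - re-pull everything from `repull_start` forward to catch
      retroactive fraud-filter and dedup adjustments;
    - treat only dates up to `stable_through` as final for pacing.
    """
    stable_through = today - timedelta(days=settle_days)
    repull_start = today - timedelta(days=trailing_days)
    return repull_start, stable_through


start, stable = repull_window(date(2024, 5, 10))
# start = 2024-05-03, stable = 2024-05-07
```

Dashboards that feed budget decisions should filter on `stable_through`; the raw warehouse table keeps the fresher, still-settling rows.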

6. Historical Data and the 18-Month Lookback Ceiling

The Problem: TikTok's reporting API caps historical data availability at roughly 18 months. If you need 24-month year-over-year comparisons, multi-year creative-performance trending, or 3-year cohort analysis — TikTok will simply not return the data. Attempts to query beyond the window return empty responses, not error codes, which means naive pipelines silently fill downstream tables with zeros.

Why it happens: TikTok is a younger platform and has deliberately scoped its reporting retention to 18 months to manage storage and query-performance costs on their side. Unlike Meta or Google, there's no extended-retention tier for enterprise accounts.

For agencies: Clients who migrated to your agency 14 months ago and want a "since we started with you" lookback are fine. Clients who want to show 2-year trend context in board decks are not. This is a structural constraint on storytelling, not a data-engineering problem.

Failure modes:

  • Silent empty responses beyond the 18-month boundary — pipelines write zeros into historical tables.
  • Quarter-over-quarter gaps when the 18-month window rolls forward and oldest quarters drop off.
  • No historical backfill path — once data is outside the window, it's gone from TikTok's side.
  • Attribution-window mismatches on edge dates — conversions attributed in month 19 that reference impressions in month 18 drop silently.

The Fix: Treat your own warehouse as the source of record for TikTok historical data. Extract daily-grain data into a warehouse table on an ongoing basis; never rely on TikTok's API for anything older than 15 months. Improvado's agentic data pipelines ship TikTok data with full historical retention in your warehouse — once it's landed, it's yours, regardless of what TikTok retains upstream.

7. Creative-Naming Inconsistency Breaks Cross-Channel Performance Views

The Problem: Creative performance analysis across channels requires a shared taxonomy — asset IDs or naming conventions that let you compare the same creative concept running on TikTok, Meta, and YouTube. TikTok's creative library doesn't enforce naming conventions, and asset IDs aren't portable across platforms. The result: your creative team names assets one way in Meta Business Manager and a different way in TikTok Ads Manager, and your cross-channel dashboard can't join them.

Why it happens: Each ad platform has its own asset-library model. TikTok uses video_id for uploaded assets but doesn't expose a stable cross-platform identifier. Manual naming conventions drift — one media buyer uses 2024_Q4_UGC_Creator-A_v2, another uses Q4_UGC_A_v2, a third uses UGC2_v2.

For agencies: A shared creative-naming taxonomy across clients — with enforced naming conventions, asset tagging rules, and cross-platform identity resolution — is one of the highest-leverage investments an agency can make. Every hour spent standardizing naming upstream saves ten hours of reconciliation downstream.

Failure modes:

  • Video-ID extraction bugs — production pipelines have encountered KeyError crashes on missing video_id fields when an ad references a placeholder asset.
  • Creative-library API rate-limit stacking — bulk-exporting creative metadata requires multiple endpoints with independent rate limits.
  • Asset-vs-ad-level confusion — TikTok reports some metrics at the ad level and others at the asset level, making it ambiguous which specific video drove performance.
  • A/B test data fragmentation — TikTok's native A/B testing splits data across test groups; exporting for external analysis requires reconstructing the test structure manually.

The Fix: Enforce a cross-platform creative-naming convention at the trafficking layer (in Asana, Airtable, or your creative ops tool) before assets are uploaded to any platform. In the warehouse, land TikTok's video_id alongside a normalized creative_concept_id dimension that joins against Meta and YouTube data — not directly on platform-native IDs.
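Enforcement works best as a validator that rejects drifted names before upload. Here's a sketch against one hypothetical convention (`<year>_<quarter>_<format>_<concept>_v<version>` — your taxonomy will differ):

```python
import re

# Hypothetical naming convention enforced at the trafficking layer:
#   <year>_<quarter>_<format>_<concept>_v<version>
NAME_RE = re.compile(
    r"^(?P<year>\d{4})_(?P<quarter>Q[1-4])_(?P<fmt>[A-Z]+)_"
    r"(?P<concept>[A-Za-z0-9-]+)_v(?P<version>\d+)$"
)


def creative_concept_id(asset_name):
    """Validate an asset name against the convention and derive the
    cross-platform concept key (name minus the version suffix).
    Drifted names fail loudly here instead of breaking joins later."""
    m = NAME_RE.match(asset_name)
    if not m:
        raise ValueError(f"Non-conforming creative name: {asset_name!r}")
    return f"{m['year']}_{m['quarter']}_{m['fmt']}_{m['concept']}"
```

Land the derived key as `creative_concept_id` next to TikTok's `video_id`, and the cross-channel join happens on the key, never on platform-native IDs.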

8. Attribution Conflicts with Google Ads, Meta, and Your MMP

The Problem: TikTok is a self-attributing network (SAN) — it claims conversions independently, using its own attribution model (7-day click, 1-day view), against its own deduplication logic. Google Ads does the same with its own 30-day click window. Meta does the same with a customizable window. Your MMP (AppsFlyer, Adjust, Branch) does the same using its own last-touch logic. On any given purchase, up to four different platforms claim credit — and the sum of their claims exceeds your actual conversion volume by 20–60%.

Why it happens: Every ad platform's attribution engine is optimized to claim credit. Default attribution windows are self-serving. TikTok's 7-day click / 1-day view default makes TikTok look worse on longer consideration cycles; Google's 30-day click makes Google look better on the same journey. Neither is wrong — they're just incomparable at face value.

TikTok Ads attribution window vs Google Ads, Meta, and LinkedIn — 7-day click / 1-day view makes TikTok look structurally less efficient in cross-channel comparisons
TikTok's 7-day click / 1-day view default is structurally shorter than Meta, Google, and LinkedIn — normalize windows before comparing ROAS.

For agencies: When a client asks "Should we shift 20% of our Meta budget to TikTok?", the answer depends entirely on whether you normalized attribution windows before comparing. Clients who have been burned by a bad shift rarely trust the next recommendation without a written methodology note.

| Platform | Click Window | View Window | iOS Handling |
|---|---|---|---|
| TikTok Ads | 7 days (default) | 1 day (fixed) | SKAN 4.0 (tiered delays) |
| Meta Ads | 7 days (default) | 1 day (customizable) | SKAN + Aggregated Events |
| Google Ads | 30 days | N/A (search) | Consent Mode v2 |
| LinkedIn Ads | 90 days | 7 days | Limited iOS impact |

The Fix: Pick one attribution window as your "normalization baseline" (most analytics teams use 7-day click / 1-day view, matching TikTok's default) and rebase Meta, Google, and LinkedIn against it before comparing. Treat each platform's native report as a platform-specific view, not a cross-channel truth. Your MMP or an agentic analytics layer should produce the cross-channel view; individual ad platforms should not.
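Where a platform exposes conversions broken out by attribution component (Meta's `action_attribution_windows` export is the common case), rebasing reduces to summing only the baseline components — a simplified sketch with illustrative window keys:

```python
def rebase_conversions(window_breakdown, baseline=("7d_click", "1d_view")):
    """Sum only the attribution components inside the normalization baseline.

    `window_breakdown` is a per-platform dict such as
    {"7d_click": 120, "1d_view": 30, "28d_click": 45}. Components outside
    the baseline (e.g. a platform's longer click tail) are excluded, so
    every platform is compared on the same window.
    """
    return sum(v for k, v in window_breakdown.items() if k in baseline)
```

For example, a platform reporting 120 seven-day-click, 30 one-day-view, and 45 extra long-tail conversions rebases to 150 — the number comparable against TikTok's default.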

9. Audiences and Custom Targeting: API Exports Don't Match UI

The Problem: Custom audiences, lookalike audiences, and interest-targeting data exported via the TikTok API frequently don't match what's shown in the Ads Manager UI. Audience size estimates drift, lookalike seed data isn't exposed in full, and interest-category taxonomies change between API versions without corresponding UI changes.

Why it happens: TikTok's audience system has separate layers — a product-UI layer (what media buyers see), a reporting-API layer (what your pipeline pulls), and a targeting-engine layer (what actually serves). These three are loosely synchronized, and the API layer sometimes lags UI changes by weeks.

For agencies: When a client asks "Why does TikTok's reported audience size differ from what I see in Ads Manager?", the honest answer is "TikTok's UI shows a slightly different estimate than the API returns." That answer doesn't travel well in a QBR deck.

Failure modes:

  • Audience-size API drift — API-reported audience size can differ from UI-displayed size by 5–20% at any moment.
  • Lookalike seed opacity — lookalike audiences expose a seed_audience_id but not the underlying seed composition.
  • Interest-category renaming — interest taxonomies occasionally rename categories; historical targeting reports orphan against new taxonomies.
  • Custom-audience match-rate gaps — uploading a customer list to create a custom audience returns a match rate that drifts 10–30% from what you see in AppsFlyer / your CDP.

The Fix: Snapshot audience metadata on a scheduled cadence (weekly is typical) so you can track drift over time rather than being surprised by it. When audiences materially diverge between UI and API, file a support ticket — TikTok's support team can reconcile the backend views, but only if you can cite specific audience IDs and timestamps.
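Once snapshots are landing on a schedule, drift detection is a one-line ratio — a sketch over a hypothetical snapshot shape (`audience_id` + `size` per pull):

```python
def audience_drift(snapshots):
    """Fractional size change between the two most recent snapshots of
    one audience (ordered oldest to newest). Returns None when there's
    no usable baseline yet."""
    if len(snapshots) < 2:
        return None
    prev, curr = snapshots[-2]["size"], snapshots[-1]["size"]
    if prev == 0:
        return None
    return (curr - prev) / prev
```

Alert when the absolute drift crosses the 5–20% band described above — and log the audience ID and snapshot timestamps, since those are exactly what a TikTok support ticket needs.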

10. Managing TikTok Across Multiple Client Business Centers

The Problem: Agencies running TikTok for a portfolio of clients face a distinct operational layer: multiple Business Center accounts, shared creative libraries with strict permission boundaries, and cross-client reporting that must roll up cleanly without leaking data between brands. TikTok's Business Center was designed for a single agency managing a handful of brands — it does not scale gracefully to 20+ client accounts without an additional data layer.

Why it happens: TikTok's permission model is per-Business-Center-per-user. Asset-sharing rules between Business Centers require explicit configuration. Portfolio-level reporting is not a native capability — you export per-client and reconstruct the roll-up yourself.

Failure modes:

  • Business Center fragmentation — permissions, asset ownership, and billing scopes have to be configured per-client. A misconfigured permission breaks extraction silently.
  • Creative library sharing constraints — cross-client asset sharing exposes asset IDs but not always the ownership chain, making cross-client creative-reuse analysis fragile.
  • Cross-client roll-up gymnastics — portfolio views (spend pacing, creative fatigue, performance benchmarks) require per-client exports combined in the warehouse, not in TikTok's UI.
  • Data-isolation risk — reporting tools that mix accounts risk exposing one client's numbers in another client's dashboard. Isolation must be enforced at the data layer, not the dashboard layer.
  • Onboarding/offboarding churn — every new client means a new TikTok Business Center connection; every churned client means careful archival. Without automation, agency ops teams lose hours per transition.

For agencies: A managed multi-tenant data layer with per-client isolation, shared taxonomy, and portfolio roll-ups is the difference between an agency that scales past 20 TikTok clients and one that plateaus at 8. This is the single highest-leverage infrastructure investment for a growth-stage performance agency.

The Fix: Use a data platform that supports multi-tenant agency workspaces with per-client data isolation at the warehouse level, shared creative taxonomy across clients, and portfolio-level reporting views on top. Improvado's agentic data pipelines handle onboarding/offboarding in days not weeks, with full retention controls on client data archival.

The Unified TikTok Data Playbook

Ten problems, one underlying pattern: TikTok's Marketing API is powerful but volatile, and the gap between "raw API access" and "reliable cross-channel analytics" is larger than any other major ad platform. Here's the playbook teams running TikTok alongside Meta and Google at enterprise scale tend to converge on.

Cross-platform click-metric comparison — TikTok includes profile, music, and hashtag clicks, while Google Ads counts landing-page clicks only
TikTok and Google Ads define "click" differently — normalize metric definitions before comparing channels.

1. Treat your warehouse as the source of record

Never query TikTok's API directly for anything older than 15 months — the 18-month retention ceiling will eventually erase it. Land TikTok data daily into a warehouse with full historical retention, and run every cross-channel report against the warehouse, not the TikTok Ads Manager UI.

2. Normalize attribution before comparing

Pick a 7-day click / 1-day view baseline (TikTok's default) and rebase Meta and Google against it before any ROAS comparison. Treat each platform's native attribution as platform-specific; your cross-channel ROAS view lives in the warehouse, not in any single ad platform.

3. Instrument server-side conversion tracking as the primary source

Events API first, pixel second. Proper event_id deduplication. Nightly reconciliation against your CRM or e-commerce platform. Teams who do this recover 15–30% of previously invisible conversions within a month.

4. Budget a 3-day data-settlement window into pacing decisions

Don't make Monday budget shifts based on weekend data. Run nightly re-pulls for a trailing 7-day window. For client-facing reports, standardize on Wednesday-for-prior-week, not Monday.

5. Enforce creative-naming conventions upstream

At the trafficking layer, not the reporting layer. Land video_id alongside a normalized creative_concept_id so cross-channel creative-performance views are possible at all.

6. Isolate client data at the warehouse, not the dashboard

For agencies, per-client data isolation has to happen below the dashboard layer. A misconfigured dashboard filter is a data-exposure incident; warehouse-level isolation is infrastructure.

7. Use a managed connector layer for the pipeline itself

Custom TikTok API integrations are a tax you pay every month. The payoff of a managed agentic data pipeline isn't just skipping the first build — it's eliminating the silent cost of every API change, rate-limit shift, enum rename, and empty-payload event for the life of the channel.

Get a TikTok data integration audit
30 minutes with an Improvado solutions architect — walk through your current TikTok extraction setup, where it's losing data, and what a managed pipeline would replace. No prep required; we'll review your schema live.

Querying TikTok Ads Data with AI Agents via Improvado MCP

Beyond traditional data pipelines, enterprise teams are running TikTok analysis through AI agents connected to Improvado's MCP (Model Context Protocol) server. The advantage over a native TikTok integration is that one prompt can cross channels — TikTok + Meta + Google + CRM in a single question.

Ready-to-use MCP prompts

Creative-performance analysis:

Show me TikTok creative performance by video asset for the last 30 days.
Rank by conversion rate and flag creatives with completion rate below 25%.

Cross-platform budget optimization:

Compare TikTok, Meta, and Google ROAS for the last 90 days
using a normalized 7-day click / 1-day view window. Where should I shift budget?

iOS attribution reconciliation:

Compare TikTok's reported iOS conversions (including modeled)
against our server-side conversion events for the last 30 days. What's the gap?

Connecting TikTok data to your AI agent

Improvado provides an MCP-compatible endpoint for enterprise customers. Once onboarded, you receive an MCP endpoint URL scoped to your workspace and a bearer token. Add the server to your Claude Code, Cursor, or ChatGPT config:

{
 "improvado": {
   "type": "streamable-http",
   "url": "https://mcp.improvado.io/v1/your-workspace",
   "headers": {
     "Authorization": "Bearer your-api-token"
   }
 }
}

Then ask, in plain English:

> Show me my top TikTok campaigns by ROAS this month, normalized against Meta.

Frequently Asked Questions

Why does TikTok show different conversion numbers than GA4?

TikTok uses a 7-day click / 1-day view attribution window by default; GA4 uses data-driven attribution across a different lookback. TikTok is a self-attributing network, meaning it claims conversions independently rather than waiting for GA4's click-based tracking. iOS ATT restrictions further widen the gap — typical divergence is 20–40%.

How far back does TikTok's Marketing API let me query data?

Roughly 18 months. Beyond that, the API returns empty responses, not errors. If you need 24-month or multi-year analysis, you have to land data in your own warehouse on an ongoing basis — once it's outside TikTok's retention window, it's gone from the source.

How does SKAN 4.0 affect my TikTok Ads data?

SKAN 4.0 introduces three tiered attribution windows (0–2 days, 3–7 days, 8–35 days), so iOS conversion data arrives in stages over weeks rather than within 24 hours. Avoid significant campaign adjustments during the first 7 days of a cohort to let SKAN data stabilize. TikTok supplements SKAN with modeled conversions, but those are estimates with methodology that isn't fully transparent.

Should I implement TikTok's Events API alongside the pixel?

Yes. Pixel-only setups miss 20–40% of conversions due to cookie restrictions and ad blockers. Running pixel plus Events API with proper event_id deduplication gives you the most complete conversion picture. Events API implementation requires engineering resources, but the data improvement is substantial — typically 15–25% conversion recovery.

Can I compare TikTok performance directly against Meta or Google?

Not using native platform reports — each platform uses different metric definitions, attribution windows, and counting methodologies. You need a normalization layer that rebases attribution windows and aligns metric definitions before cross-channel comparison is meaningful.

What are TikTok's rate limits, and how do I stay under them?

Reporting endpoints run around 600 requests per minute per token, with 100-items-per-page caps. Large historical backfills hit limits first. Use adaptive backoff at the framework level, separate app tokens by client tier (for agencies), and spread backfills across off-peak windows.

How do I handle TikTok's province / region IDs in cross-channel reports?

Maintain a versioned geo-dimension table keyed on TikTok province ID, with effective_from / effective_to columns. TikTok occasionally renames or splits region IDs between API versions; without version control on the dimension, historical reports will orphan-join.

Why does my TikTok Business Center show different numbers than my API pull?

Most commonly: the UI defaults to a different attribution window or timezone than the API query. Confirm both are using the same attribution_window parameter and the same timezone (UTC is safest for cross-country roll-ups). Secondary causes: partial prior-day data and SKAN postback timing.

How should agencies structure TikTok Business Center access for 20+ clients?

Per-client Business Center with explicit per-user permission scoping; separate app tokens by client tier to avoid rate-limit starvation; warehouse-level data isolation enforced below the dashboard layer; a shared creative-naming taxonomy enforced at the trafficking layer.

Is the Events API hard to implement?

It's a server-side implementation, so it requires engineering resources — not a front-end tag. Typical first implementation takes 1–3 weeks. Ongoing maintenance is minimal once deduplication and hashing are correctly configured. The conversion-recovery payoff typically justifies the build within the first month.

How do I detect silent extraction failures on TikTok?

Row-count monitoring per day per advertiser_id — if yesterday's pull returns fewer than 50% of the prior 7-day average, trigger an alert. HTTP 200 with data: [] is a common failure signature that naive pipelines miss.
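That check is a few lines of Python — a minimal sketch of the 50%-of-trailing-average rule:

```python
def is_suspect_pull(todays_rows, prior_counts, threshold=0.5):
    """Flag a daily pull whose row count falls below `threshold` of the
    trailing average — the signature of an HTTP-200-but-empty extract.

    `prior_counts` is the list of row counts from the prior days
    (typically 7) for the same advertiser_id.
    """
    if not prior_counts:
        return False  # no baseline yet — can't judge the first pulls
    baseline = sum(prior_counts) / len(prior_counts)
    return todays_rows < threshold * baseline
```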

What's the difference between Improvado MCP and TikTok's Reporting API?

TikTok's Reporting API has restrictive rate limits, requires technical implementation, and returns only TikTok data. Improvado's MCP endpoint combines TikTok with Meta, Google, LinkedIn, your CRM, and your e-commerce data — you ask questions in plain English and get cross-platform answers against a normalized schema, in days not weeks from onboarding.

Does TikTok expose postal-code-level geo reporting?

No. TikTok's geographic reporting maxes out at the province/region level. If you need postal-code resolution, you'll need to complement TikTok data with first-party purchase-location data from your e-commerce platform or CRM.

What about creative fatigue detection on TikTok?

TikTok doesn't expose a native creative-fatigue signal the way Meta does. You build it in the warehouse: trailing-7-day CPA divided by trailing-30-day CPA per creative concept — once the ratio exceeds 1.3, the creative is in fatigue.
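As a warehouse-side sketch, the ratio test looks like this:

```python
def fatigue_ratio(cpa_7d, cpa_30d):
    """Trailing-7-day CPA over trailing-30-day CPA for one creative
    concept; None when the 30-day baseline is unusable."""
    if cpa_30d <= 0:
        return None
    return cpa_7d / cpa_30d


def is_fatigued(cpa_7d, cpa_30d, threshold=1.3):
    """True once recent CPA exceeds the 30-day baseline by the threshold."""
    ratio = fatigue_ratio(cpa_7d, cpa_30d)
    return ratio is not None and ratio > threshold
```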

Is TikTok data safe to include in public dashboards?

For agencies managing multiple clients, the primary risk is cross-client exposure. Enforce data isolation at the warehouse level, not at the dashboard filter level — a misconfigured filter is a data-exposure incident; warehouse-level row-level security is infrastructure.

Book a cross-platform attribution review
If you're reconciling TikTok against Meta and Google every week, we'll map a normalized attribution baseline for your stack in 45 minutes, with an Improvado solutions architect and an analytics engineer on the call. 1000+ connectors, normalized schema, in days not weeks.
