The Lookback-Window Mismatch and Why It Breaks Blended CAC Optimization
Jamie

Why blended CAC looks “fine” while budgets drift
Blended CAC is supposed to be the sanity check: one number that tells you whether growth is efficient across channels. But it quietly breaks when each platform is allowed to “decide” which conversions it gets to claim.
The most common culprit is the lookback-window mismatch. Google Ads, Meta, LinkedIn, your analytics tool, and your CRM can all use different attribution windows by default (and they can apply different logic for view-through vs click-through). When you blend those outputs without normalizing the time window, you don’t get a blended CAC. You get a blended artifact—one that can mislead budget optimization even when every individual dashboard looks reasonable.
What a lookback window really controls
A lookback window is the period after an ad interaction during which a conversion can be credited to that ad. It answers: “How long do we keep giving this touchpoint a chance to claim the outcome?”
Two implications matter for blended CAC:
- Timing shifts. Conversions can be “pulled” into a reporting period even if the spend happened earlier.
- Claim scope varies. Some windows are more permissive (more conversions claimed), others stricter (fewer claimed).
If you compare channels where one platform is still claiming conversions 28 days after a click while another only claims 7, you aren’t comparing efficiency. You’re comparing policy.
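To see how much policy alone can matter, here is a minimal Python sketch: the same click-and-conversion pair is credited under a hypothetical 28-day click window and rejected under a 7-day one. The channel names and window lengths are illustrative, not any platform’s actual defaults.

```python
from datetime import datetime, timedelta

# Hypothetical per-channel click lookback windows, in days.
# Illustrative values only; not the platforms' real default settings.
LOOKBACK_DAYS = {"meta": 28, "search": 7}

def is_credited(channel: str, clicked_at: datetime, converted_at: datetime) -> bool:
    """True if the conversion lands inside the channel's click lookback window."""
    window = timedelta(days=LOOKBACK_DAYS[channel])
    return clicked_at <= converted_at <= clicked_at + window

click = datetime(2024, 3, 1)
conversion = datetime(2024, 3, 12)  # 11 days after the click

# Identical touchpoint, identical conversion; only the window policy differs.
print(is_credited("meta", click, conversion))    # True  (11 days <= 28)
print(is_credited("search", click, conversion))  # False (11 days > 7)
```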
The mismatch pattern that breaks budget decisions
1) Spend is real-time, conversions are not
Budgets typically operate on daily or weekly feedback loops. Lookback windows operate on delayed credit assignment. That means the “ROI signal” your optimizer sees can lag—by design—and the lag differs by channel.
In practice, you’ll see patterns like:
- Channel A looks great this week because it’s still claiming conversions triggered by last week’s spend.
- Channel B looks weak this week because it stops claiming sooner, even if it’s driving comparable downstream pipeline.
Blended CAC can stay stable while channel-level budgets swing in the wrong direction.
2) View-through makes the window problem louder
When view-through attribution is enabled (or included in default reporting), the window mismatch becomes even more pronounced. A view-through window tends to credit impressions that are harder to validate against downstream intent, especially if your conversion event is high in the funnel (lead form submit, “book a demo,” trial start).
The result is not necessarily “bad” data—just data that’s not comparable unless you explicitly align definitions across sources.
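A tiny sketch of why inclusion rules matter as much as window length: the same export, summed with and without view-through rows. The “interaction” labels are assumptions about how a platform might tag rows, not a specific schema.

```python
# Same channel, same week of spend; the only difference is whether
# view-through rows are included in the conversion count.
rows = [
    {"channel": "meta", "interaction": "click", "conversions": 30},
    {"channel": "meta", "interaction": "view",  "conversions": 25},
]
spend = 10_000.0

click_only = sum(r["conversions"] for r in rows if r["interaction"] == "click")
with_views = sum(r["conversions"] for r in rows)

print(f"click-only CAC:    ${spend / click_only:,.2f}")   # $333.33
print(f"with view-through: ${spend / with_views:,.2f}")   # $181.82
```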
3) Platform-reported CAC becomes a moving target
Even if you never change budgets, platform CAC can drift simply because the cohort of conversions being credited changes over time as windows mature. Early in the week, you see a partial picture; later, more conversions appear inside the attribution window and retroactively improve CAC. This can reward channels with longer windows and penalize those with shorter ones—before the underlying business outcome is even clear.
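A minimal sketch of that maturing effect, with an assumed conversion-lag profile: the spend never changes, yet the reported CAC keeps “improving” as late conversions land inside the window.

```python
# Illustrative: $5,000 of spend in one week, with conversions trickling
# in over a 28-day window. The lag profile below is an assumption.
spend = 5_000.0
new_conversions_by_day = {1: 20, 7: 12, 14: 8, 28: 5}

credited = 0
for day, new in sorted(new_conversions_by_day.items()):
    credited += new
    print(f"day {day:>2}: {credited} conversions credited, CAC = ${spend / credited:,.2f}")

# day  1: 20 conversions credited, CAC = $250.00
# day  7: 32 conversions credited, CAC = $156.25
# day 14: 40 conversions credited, CAC = $125.00
# day 28: 45 conversions credited, CAC = $111.11
```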
A concrete example of how blended CAC gets distorted
Imagine you spend $10,000 per week on each of two channels.
- Meta reports with a longer click/view lookback, so it keeps attributing conversions to last week’s spend while you’re already evaluating this week’s budget.
- Search reports with a shorter click lookback, so it mostly attributes conversions quickly and then stops.
If you pull both into a single blended CAC dashboard without harmonizing windows, Meta’s CAC can look artificially strong “this week” (it’s receiving late credit), while Search looks weaker (its credit has already matured). A budget optimizer may shift spend to Meta because the reported marginal efficiency appears higher—when the shift is partially explained by attribution timing, not performance.
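Here is that scenario in rough numbers. Everything below is invented for illustration: both channels end up driving similar volume per week of spend, but Meta’s longer window books part of last week’s credit into this week’s snapshot.

```python
# Weekly snapshot, illustrative figures. Each channel spends $10,000/week.
spend = 10_000.0

search_conversions_this_week = 48   # short window: near-complete in-week credit
meta_in_week = 30                   # partial credit for this week's spend
meta_late_from_last_week = 25       # last week's spend, credited only now

meta_reported = meta_in_week + meta_late_from_last_week  # 55 in this week's report

print(f"Search CAC (reported): ${spend / search_conversions_this_week:,.2f}")  # $208.33
print(f"Meta CAC (reported):   ${spend / meta_reported:,.2f}")                 # $181.82

# Meta "wins" the weekly comparison, but 25 of its 55 credited conversions
# belong to last week's cohort. An optimizer acting on this snapshot is
# rewarding attribution timing, not marginal performance.
```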
Symptoms you can spot without changing any tooling
You don’t need a new attribution model to recognize the mismatch. Look for these operational tells:
- Retroactive improvements. CAC for certain channels consistently “gets better” 7–30 days after period close (a quick check is sketched after this list).
- Unexplained channel divergence. One platform’s conversion volume rises while CRM-qualified volume stays flat.
- Budget whiplash. Weekly reallocations correlate more with reporting windows than with pipeline velocity.
- Messy taxonomy. If campaign naming is inconsistent, you can’t even isolate which slices are affected. (If this sounds familiar, the “UTM tax” is real: inconsistent naming makes every normalization step harder.)
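The first tell, retroactive improvement, is cheap to monitor: snapshot each channel’s reported CAC at period close, re-pull it once the windows have matured, and flag large moves. A minimal sketch with invented numbers and an assumed 10% tolerance:

```python
# channel: (CAC snapshotted at period close, CAC re-pulled ~30 days later)
snapshots = {
    "meta":     (240.0, 175.0),
    "search":   (210.0, 205.0),
    "linkedin": (390.0, 300.0),
}

TOLERANCE = 0.10  # flag anything that retroactively improves by more than 10%

for channel, (at_close, matured) in snapshots.items():
    drift = (at_close - matured) / at_close
    if drift > TOLERANCE:
        print(f"{channel}: CAC improved {drift:.0%} after close; suspect window mismatch")

# meta: CAC improved 27% after close; suspect window mismatch
# linkedin: CAC improved 23% after close; suspect window mismatch
```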
How to fix it without pretending attribution is perfect
Normalize to a single decision window
Pick one window that matches how your business decides. For many teams, that’s a 7-day or 14-day click window for tactical budget moves, with a longer window used for monthly evaluation. The key is consistency: every channel should be comparable inside the same window, even if you keep longer-window reporting as a secondary view.
This doesn’t require “winning” the attribution debate. It’s a governance choice: decide what timeframe your optimizer is allowed to use.
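A minimal sketch of that governance choice, assuming a touch-level export with channel, click time, and conversion time columns (the schema is hypothetical; map it to whatever your pipeline produces). The point is that one decision window is applied to every channel before CAC is computed.

```python
from datetime import timedelta

import pandas as pd

# Hypothetical touch-level rows; column names are assumptions.
touches = pd.DataFrame({
    "channel":      ["meta", "meta", "search", "search"],
    "clicked_at":   pd.to_datetime(["2024-03-01", "2024-03-03", "2024-03-02", "2024-03-05"]),
    "converted_at": pd.to_datetime(["2024-03-20", "2024-03-06", "2024-03-04", "2024-03-08"]),
})
spend = pd.Series({"meta": 10_000.0, "search": 10_000.0})

DECISION_WINDOW = timedelta(days=7)  # the governance choice for tactical budget moves

# Re-credit conversions under the single decision window. Meta's 19-day
# conversion would count under its native 28-day policy, but not here.
lag = touches["converted_at"] - touches["clicked_at"]
credited = touches[lag <= DECISION_WINDOW]

conversions = credited.groupby("channel").size().reindex(spend.index, fill_value=0)
print(spend / conversions)  # CAC that every channel can be compared on
```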
Separate reporting views for different questions
- Optimization view: short, consistent lookback; minimal view-through; aligned conversion event.
- Investment view: longer lookback; cohort-based read; includes delayed outcomes like SQLs or revenue.
When teams mix these views into one KPI, they end up with a CAC number that answers no specific question well.
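One lightweight way to keep the two views from bleeding into each other is to write them down as explicit configuration. The field names below are assumptions, but the discipline is the point: each view answers one question, with its own window, inclusion rules, and conversion event.

```python
# Two separate KPI definitions, stated explicitly rather than left to
# platform defaults. All field names and values are illustrative.
VIEWS = {
    "optimization": {
        "lookback_days": 7,
        "include_view_through": False,
        "conversion_event": "trial_start",
        "cadence": "weekly",
    },
    "investment": {
        "lookback_days": 90,
        "include_view_through": True,
        "conversion_event": "sql_created",
        "cadence": "monthly",
        "read": "cohort-based",
    },
}
```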
Use CRM outcomes as the tie-breaker metric
Platform conversion counts are helpful, but pipeline and revenue are what you ultimately buy. When there’s disagreement between channels, use CRM stages (MQL, SQL, opportunities, revenue) to validate whether the platform with “better CAC” is actually producing better business outcomes—or simply claiming more credit inside a longer window.
This is also where disciplined definitions matter: a “lead” event can be too noisy to arbitrate budget decisions, especially when view-through is in play.
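A sketch of the tie-breaker read with illustrative tables: platform-credited conversions on one side, CRM outcomes on the other, joined on channel. The stage fields and join key are assumptions; in practice you join on whatever identity links ad touches to CRM records.

```python
import pandas as pd

platform = pd.DataFrame({
    "channel": ["meta", "search"],
    "spend": [10_000.0, 10_000.0],
    "credited_conversions": [55, 48],
})
crm = pd.DataFrame({
    "channel": ["meta", "search"],
    "sqls": [9, 14],
    "pipeline_usd": [180_000.0, 260_000.0],
})

merged = platform.merge(crm, on="channel")
merged["platform_cac"] = merged["spend"] / merged["credited_conversions"]
merged["cost_per_sql"] = merged["spend"] / merged["sqls"]

# Meta "wins" on platform CAC ($181.82 vs $208.33) but loses on cost per
# SQL ($1,111 vs $714): a hint that its extra credited conversions are
# window policy, not business outcome.
print(merged[["channel", "platform_cac", "cost_per_sql", "pipeline_usd"]])
```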
Make your data pipeline enforce the rules
The most durable fix is to stop relying on each platform’s UI as your source of truth. Instead, collect the raw performance data, standardize naming, align windows, and compute KPIs in one governed layer—then feed that to dashboards and budget tooling.
This is the kind of job marketing data infrastructure platforms are built for. With Funnel.io, teams can continuously pull data from ad platforms, analytics, and CRMs, apply transformations like naming harmonization and KPI calculations, and deliver an analysis-ready dataset where attribution-window assumptions are explicit and consistent.
Where internal process usually fails
Even with the right model, teams get stuck because the work is distributed across marketing ops, analytics, and finance—and no one owns the “blended CAC contract.” Two common friction points:
- Inconsistent taxonomy. Without a single naming standard, you can’t reliably align windows at the right granularity. The fastest way to reduce errors is to fix campaign naming at the source and enforce it downstream (see the article on the UTM tax and the fix for inconsistent campaign naming).
- Invisible backlog of measurement requests. Attribution debates often create duplicate, slightly different asks across teams (“Can we see 7-day?”, “Can we exclude view-through?”, “Can we break out by pipeline stage?”). Left unmanaged, that becomes a measurement versioning problem (related to feedback debt and duplicate requests).
The practical outcome: fewer false positives in “winner” channels
Once lookback windows are aligned, two things happen quickly:
- Your blended CAC stabilizes, and when it moves, the movement reflects real performance rather than shifting credit.
- Budget optimization improves because channels compete on comparable rules, and late-arriving conversions stop distorting short-term decision cycles.
The goal isn’t a perfect attribution model. It’s to prevent a silent mismatch from turning your blended CAC into a number that looks precise but optimizes the wrong thing.


