Yes. The campaign generated $259,324 in incremental revenue, giving us a 159% ROI. The effect is statistically significant (p < 0.001). Every dollar we invested returned $2.59 in revenue.
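As a quick sanity check, the ROI and ROAS figures follow directly from the incremental revenue and the implied ad spend. Note the spend figure below is back-calculated from the 2.59x ROAS rather than stated in the report:

```python
incremental_revenue = 259_324   # from the analysis above
ad_spend = 100_125              # implied: 259,324 / 2.59 ROAS (assumed, not reported)

roas = incremental_revenue / ad_spend                 # revenue per dollar spent
roi = (incremental_revenue - ad_spend) / ad_spend     # net return on spend

print(f"ROAS: {roas:.2f}x, ROI: {roi:.0%}")  # → ROAS: 2.59x, ROI: 159%
```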
| Market | Inc. Revenue | Lift % | Daily Lift | 95% CI |
|---|---|---|---|---|
| San Francisco | $60,333 | +15.1% | $1,946/day | [$58,027 – $62,591] |
| Austin | $53,423 | +16.5% | $1,723/day | [$51,472 – $55,490] |
| Miami | $38,875 | +11.0% | $1,254/day | [$36,940 – $40,821] |
| Chicago | $58,044 | +15.7% | $1,872/day | [$55,780 – $60,364] |
| Dallas | $48,649 | +14.8% | $1,569/day | [$46,772 – $50,615] |
| Total | $259,324 | +14.6% | $8,365/day | [$248,992 – $269,880] |
We matched 5 test cities (which received the campaign) with 5 similar control cities (no campaign). The control cities serve as the counterfactual: an estimate of what sales in the test cities would have been had we not run the ad.
| Test City | Control Twin | Pre-Period Correlation |
|---|---|---|
| San Francisco | Seattle | r = 0.994 |
| Austin | Denver | r = 0.992 |
| Miami | Phoenix | r = 0.993 |
| Chicago | Boston | r = 0.991 |
| Dallas | Atlanta | r = 0.992 |
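The twin-matching step above can be sketched as a highest-pre-period-correlation search over candidate control markets. The city series, baselines, and noise levels below are made-up stand-ins for the real pre-period sales data:

```python
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(31)
season = 1000 * np.sin(2 * np.pi * days / 7)  # shared weekly sales cycle

# Hypothetical pre-period daily sales; a real run would use POS data
test_city = 12000 + season + rng.normal(0, 150, 31)
candidates = {
    "Seattle": 11800 + season + rng.normal(0, 150, 31),
    "Phoenix": 9000 + 0.3 * season + rng.normal(0, 400, 31),
    "Denver": 10500 + 0.6 * season + rng.normal(0, 300, 31),
}

# Score each candidate by Pearson correlation with the test city
scores = {city: np.corrcoef(test_city, s)[0, 1] for city, s in candidates.items()}
twin = max(scores, key=scores.get)
print(f"Best twin: {twin} (r = {scores[twin]:.3f})")
```

Matching on correlation alone is a simplification; in practice you would also screen candidates on market size, trend, and geography before accepting a twin.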
I used two independent methods to make sure the results hold up:
| Method | Approach | Result |
|---|---|---|
| Diff-in-Diff | OLS regression with HC1 robust SEs | $1,713/day (p = 1.95 × 10⁻²³) |
| CausalImpact | Bayesian counterfactual + bootstrap CIs | $259,324 cumulative |
Both methods agree, which gives us strong confidence in the result.
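For readers who want the mechanics, here is a self-contained sketch of the diff-in-diff estimate with HC1 robust standard errors. The panel data is simulated (the city baselines, noise levels, and the $1,700/day true effect are assumptions chosen to mirror the reported estimate), and the regression is done in plain NumPy rather than a stats package:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cities, n_days, pre = 10, 62, 31  # 5 test + 5 control, 31 pre + 31 post days

treated = np.repeat([1] * 5 + [0] * 5, n_days)             # first 5 cities ran the campaign
post = np.tile((np.arange(n_days) >= pre).astype(int), n_cities)
base = np.repeat(rng.normal(12000, 500, n_cities), n_days)  # city-level sales baselines
true_effect = 1700.0                                        # assumed $/day lift for the demo
y = base + true_effect * treated * post + rng.normal(0, 300, n_cities * n_days)

# Design matrix: intercept, treated, post, treated x post (the DiD term)
X = np.column_stack([np.ones_like(y), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# HC1 robust covariance: (X'X)^-1 X' diag(e^2) X (X'X)^-1 * n/(n-k)
n, k = X.shape
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
cov = XtX_inv @ meat @ XtX_inv * n / (n - k)

did, se = beta[3], np.sqrt(cov[3, 3])
print(f"DiD estimate: ${did:,.0f}/day (HC1 SE ${se:,.0f})")
```

The interaction coefficient recovers the assumed lift because the constant city baselines cancel in the before/after differencing; in a real run you would cluster the standard errors by city rather than rely on HC1 alone.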
With 159% ROI and 2.59x ROAS (within the typical industry range of 2-4x), increase budget for the next seasonal push, particularly in San Francisco and Chicago.
Miami showed the lowest relative lift (+11.0%) but still generated $38,875. Test alternative creatives or channels before the next cycle.
Apply the same twin-city approach to email, social, and CTV campaigns to measure incrementality across all channels.
Instead of one-off studies, set up continuous measurement so we can shift budget in real time based on what's working.
| Limitation | Detail |
|---|---|
| Synthetic Data | This is a portfolio demo using simulated data. A real analysis would need actual POS/transaction data. |
| Single Period | Only 31 days measured. Longer-term effects like brand lift or customer lifetime value aren't captured here. |
| Confounders | Local factors we can't see (competitor promos, weather, etc.) could have some effect on the numbers. |
| No Channel Split | We measured the total campaign effect. We can't tell which specific ad channel drove the most lift. |