Marketing has a measurement problem. Budgets are larger than ever, channels are more fragmented than ever, and yet most marketers are still making decisions based on models built for a simpler world. We sat down with Professor Ron Berman of the Wharton School at the University of Pennsylvania — one of the leading academic voices in marketing science — to cut through the noise on three of the most debated tools in the field: Multi-Touch Attribution (MTA), Marketing Mix Modeling (MMM), and A/B Testing.
Before diving into the tools, Prof. Berman framed the fundamental tension every marketing leader faces today. Modern customer journeys span dozens of touchpoints: a Google search, a LinkedIn ad, a podcast mention, a retargeted display ad, a direct visit. Each channel wants credit. Each vendor claims their platform drove the conversion. The result is a measurement ecosystem where the numbers often add up to more than 100%, a statistical impossibility that signals something is deeply broken.

"The question isn't which model is right," Prof. Berman noted. "It's which model is right for your business — and most companies haven't asked that question carefully enough."
MTA attempts to assign fractional credit to each touchpoint in a customer's path to conversion. Last-click, first-click, linear, time-decay, data-driven — the variations are endless, and the debates around them are fierce.
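The mechanics of these credit rules are easy to see in code. Below is a minimal sketch (not any vendor's actual implementation) of how four common rules split a single conversion's credit across an ordered touchpoint path; the channel names and the `decay` parameter are invented for illustration.

```python
def attribute(touchpoints, rule="linear", decay=0.5):
    """Return {channel: credit} for one conversion; credits sum to 1.0."""
    n = len(touchpoints)
    if rule == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif rule == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif rule == "linear":
        weights = [1.0 / n] * n
    elif rule == "time_decay":
        # later touches weigh more; each step back in time is discounted
        raw = [decay ** (n - 1 - i) for i in range(n)]
        weights = [w / sum(raw) for w in raw]
    else:
        raise ValueError(f"unknown rule: {rule}")
    credit = {}
    for channel, w in zip(touchpoints, weights):
        credit[channel] = credit.get(channel, 0.0) + w
    return credit

# One hypothetical path with a repeated channel
path = ["search", "display", "email", "display"]
print(attribute(path, "last_click"))   # all credit to the final display touch
print(attribute(path, "linear"))       # each touch gets 0.25
print(attribute(path, "time_decay"))
```

Note how the same path yields very different channel totals depending on the rule chosen, which is exactly why the debates are fierce: the rule is an assumption, not a measurement.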
Where MTA works well:
Where MTA breaks down:
Prof. Berman was direct on this point: MTA is a useful operational tool, but it should never be the sole basis for budget allocation decisions. Over-relying on it systematically under-funds upper-funnel and brand channels — which don't show up cleanly in click paths — and over-rewards retargeting, which often takes credit for conversions that would have happened anyway.
MMM, once considered outdated, has experienced a dramatic renaissance, driven in part by the privacy-led collapse of user-level tracking and in part by major platforms like Google and Meta releasing their own open-source MMM frameworks (Meridian and Robyn, respectively).

MMM takes an aggregate, econometric approach. It uses historical spend and sales data across channels to statistically estimate the contribution of each marketing activity, while controlling for external factors like seasonality, pricing, and macroeconomic conditions.
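Stripped to its core, that econometric approach is a regression of aggregate sales on channel spend plus controls. The sketch below uses synthetic weekly data (all numbers invented) and plain least squares; real frameworks such as Robyn and Meridian layer on adstock carryover, saturation curves, and Bayesian priors, none of which are shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
tv = rng.uniform(0, 100, weeks)       # weekly TV spend (synthetic)
search = rng.uniform(0, 50, weeks)    # weekly search spend (synthetic)
season = np.sin(2 * np.pi * np.arange(weeks) / 52)  # yearly seasonality

# "True" data-generating process -- unknown to the analyst in practice
sales = 200 + 1.5 * tv + 3.0 * search + 40 * season + rng.normal(0, 10, weeks)

# Design matrix: intercept, channel spends, seasonality control
X = np.column_stack([np.ones(weeks), tv, search, season])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, beta_tv, beta_search, beta_season = coef

print(f"TV:     {beta_tv:.2f} incremental sales per $")
print(f"Search: {beta_search:.2f} incremental sales per $")
```

Because the model sees only aggregates, it needs no user-level tracking at all, which is precisely why it survives the privacy era. The trade-off is that it recovers channel-level coefficients, not touchpoint-level credit.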
MMM's key strengths:
The honest limitations:
The key insight from Prof. Berman: MMM tells you what should work at a strategic level. It is a budget planning tool, not an execution tool. Companies that use it to set channel mix and then use MTA to optimize within channels are getting the best of both worlds.
If MMM is the strategist and MTA is the tactician, A/B testing is the scientist. Randomized controlled experiments are the only methodology that can establish true causal lift — not just correlation.
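The standard analysis behind such an experiment is simple enough to fit in a few lines. Here is a hedged sketch of a two-proportion z-test for conversion lift between a control and a treatment group, using only the standard library; the visitor and conversion counts are illustrative, not real data.

```python
from math import sqrt, erf

def lift_z_test(conv_a, n_a, conv_b, n_b):
    """Return (absolute lift, z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 10,000 visitors per arm
lift, z, p = lift_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift={lift:.4f}, z={z:.2f}, p={p:.3f}")
```

Randomization is what makes the resulting lift causal: because assignment to the arms is random, the treatment group's extra conversions cannot be explained by selection, seasonality, or any other confounder.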
When A/B testing is the right call:
The real-world constraints Prof. Berman raised:
The practical takeaway: run experiments where you can, but be honest about where you cannot. An untested assumption fed into an MMM is not automatically worse than a noisy A/B result from an underpowered test.
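"Underpowered" is a quantitative statement, and it is worth seeing why. The sketch below approximates the sample size per arm needed to detect a given conversion-rate lift at 5% significance and 80% power, using the standard normal approximation; the baseline rates and lifts are made up for illustration.

```python
from math import sqrt, ceil

def n_per_arm(p_base, lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors per arm for a two-proportion test.

    z_alpha: critical value for a two-sided 5% test.
    z_power: normal quantile for 80% power.
    """
    p_test = p_base + lift
    p_bar = (p_base + p_test) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_power * sqrt(p_base * (1 - p_base)
                            + p_test * (1 - p_test))) ** 2
    return ceil(num / lift ** 2)

# Detecting a 0.5-point lift on a 5% baseline takes far more traffic
# than detecting a 2-point lift on the same baseline:
print(n_per_arm(0.05, 0.005))
print(n_per_arm(0.05, 0.02))
```

Halving the detectable lift roughly quadruples the required traffic, which is why small sites and niche B2B funnels so often end up with the noisy, underpowered results the takeaway warns about.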
Prof. Berman closed with a note that felt especially relevant for the B2B and education marketing space: measurement maturity is a competitive advantage. As privacy regulations tighten and third-party signals continue to erode, organizations that have invested in first-party data infrastructure, clean MMM inputs, and a culture of experimentation will have a structural edge over those still relying on platform-reported ROAS. The roadmap is not complicated — but it does require commitment: