The metrics that used to run DTC are now lying to you

Author

Ernests Krafts

There's a specific kind of confidence that comes from watching your ROAS hold at 4x for three straight months. It feels like proof. It feels like you know what you're doing.

Then you run a geo holdout test and find out the real number is closer to 2x. The rest of those conversions were going to happen anyway — people who already knew your brand, already had the tab open, already had the product in their cart. You weren't driving them. You were just standing in their way and claiming the credit.

How the core metrics broke

ROAS didn't break because marketers stopped caring. It broke because the infrastructure underneath it quietly fell apart. After iOS 14.5, attribution accuracy dropped by as much as 70% for many brands. Meta and Google responded by filling the gaps with modeled data — which is a polite way of saying educated guesses that tend to flatter the platform doing the guessing.

CAC has its own problem. It's a lagging indicator disguised as a leading one. You spend in January, the customer buys in March after seeing your brand six more times in places you can't track — a Reddit thread, a group chat, a TikTok save. By the time CAC shows up in your dashboard, it's already a historical artifact, not a signal you can act on. And with ecommerce acquisition costs up 40–60% over the last two years, a lot of teams are optimizing a number that's both inflated and late.

Conversion rate has gotten stranger too. More purchase decisions are now starting inside AI tools — ChatGPT, Perplexity, Google's AI overviews — which means a growing share of your buyers arrive already convinced, having never touched a tracked surface. Your conversion rate looks great. Your attribution model has no idea why.

Platform-reported metrics haven't just become less accurate. They've become actively misleading — optimized to justify the budget you're already spending, not to tell you whether it's working.

What's replacing them

The growth marketers who are making good decisions right now have quietly shifted to a different set of questions. Not "what's our ROAS?" but "what's our incrementality?" Not "what did it cost to acquire this customer?" but "how long until we get that money back, and from which cohort?"

Incrementality testing.

Geo holdout tests — where you turn off spend in a control region and watch what happens to revenue — are the closest thing to a ground truth most DTC brands can access right now. Tools like Northbeam and Haus have made this more accessible, but you don't need a six-figure analytics contract to run a basic test. What you need is the willingness to see a number that might be uncomfortable. Most brands run one test, don't like the results, and go back to platform reporting. The ones that keep testing eventually get much better at spending.
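The arithmetic behind a geo holdout readout is simple enough to sketch. This is a minimal illustration with made-up numbers (the function name and figures are mine, not from any vendor's tooling), assuming treatment and control regions of comparable size:

```python
# Toy geo holdout readout: compare revenue in regions where spend
# stayed on (treatment) against matched regions where it was paused
# (control). All numbers are illustrative, not benchmarks.

def incremental_roas(treatment_revenue, control_revenue, ad_spend):
    """Revenue lift attributable to spend, per dollar spent."""
    lift = treatment_revenue - control_revenue
    return lift / ad_spend

# Platform reporting credits all treatment revenue to ads:
platform_roas = 400_000 / 100_000  # 4.0x

# The holdout reveals how much revenue happens with spend off:
real_roas = incremental_roas(
    treatment_revenue=400_000,
    control_revenue=200_000,  # baseline revenue in the paused regions
    ad_spend=100_000,
)  # 2.0x
```

The uncomfortable number is the gap between those two values: half the "return" was going to happen anyway.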

Contribution margin over ROAS.

ROAS tells you revenue relative to ad spend. Contribution margin tells you what actually landed in the bank after COGS, shipping, discounts, and returns. These two numbers can tell completely opposite stories, especially after a promo. Tools like Triple Whale and Polar have been pushing hard on this — building margin into the dashboard rather than treating it as a Finance problem. If your analytics stack still can't show you margin by channel, that's the gap worth closing first.
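To make the "opposite stories" point concrete, here is a sketch with illustrative per-order numbers (all figures invented for the example) showing how a promo can clear 4x ROAS and still lose money:

```python
# Contribution margin per order vs. ROAS. Every figure below is
# illustrative -- plug in your own unit economics.

def contribution_margin(revenue, cogs, shipping, discounts, returns, ad_spend):
    """What actually lands in the bank after all order-level costs."""
    return revenue - cogs - shipping - discounts - returns - ad_spend

revenue, spend = 120.0, 30.0
roas = revenue / spend  # 4.0x -- looks healthy on the ad dashboard

margin = contribution_margin(
    revenue=revenue,
    cogs=55.0,
    shipping=12.0,
    discounts=24.0,  # the promo that juiced the ROAS
    returns=8.0,
    ad_spend=spend,
)  # -9.0 -- underwater on every order despite the 4x
```

Same order, two dashboards, opposite conclusions. That's why margin belongs next to spend, not in a monthly finance deck.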

Payback period by cohort.

CAC as a single number is close to useless at this point. What matters is how fast different acquisition cohorts pay back, and whether that payback period is getting longer or shorter over time. A brand spending $90 to acquire a customer who returns three times in 90 days is in a completely different position from one spending $60 to acquire a customer who buys once and never comes back. The blended CAC looks better in the second case. The business is worse.
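The two-cohort comparison above can be worked through directly. A rough sketch, with invented monthly margin figures standing in for real cohort data:

```python
# Months of cumulative contribution margin needed to recover CAC.
# Cohort margin streams below are illustrative.

def payback_months(cac, monthly_margin_per_customer):
    """First month where cumulative margin covers CAC, else None."""
    cumulative = 0.0
    for month, margin in enumerate(monthly_margin_per_customer, start=1):
        cumulative += margin
        if cumulative >= cac:
            return month
    return None  # cohort never pays back within the window

# Cohort A: $90 CAC, repeat buyers -> recovered in month 3
cohort_a = payback_months(90, [40, 35, 30, 25])

# Cohort B: $60 CAC, one purchase and gone -> never recovered
cohort_b = payback_months(60, [35, 0, 0, 0])
```

Blended CAC prefers cohort B. The payback view tells you B never returns its acquisition cost at all.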

Creative is now a measurement problem

Here's the shift that catches most growth marketers off guard: with Meta's Advantage+ and broad targeting now doing most of the audience selection work, the creative itself has become the primary targeting signal. The algorithm figures out who to show your ad to based on what's in the ad. Which means creative performance analytics is no longer a nice-to-have sitting in someone else's team — it's a core measurement layer.

The brands scaling efficiently right now are tracking hook rate (how many people watch the first three seconds), thumb-stop ratio, and creative-level incrementality — not just CTR and CPM. They're running 30, 40, 50 creative variations a month and treating the data as a feedback loop, not a report. If your analytics stack can tell you which channel drove revenue but not which specific ad inside that channel did the work, you're missing half the picture.
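A creative-level scorecard doesn't need to be elaborate to be useful. Here's a toy version — the field names and numbers are made up, not any platform's API, and "hook rate" is computed as 3-second views over impressions:

```python
# Toy creative scorecard: rank ads by hook rate, not just CTR/CPM.
# Field names and figures are illustrative, not a real ad API.

creatives = [
    {"id": "ugc_testimonial_01", "impressions": 50_000,
     "three_sec_views": 14_000, "purchases": 210},
    {"id": "static_offer_03", "impressions": 50_000,
     "three_sec_views": 4_500, "purchases": 95},
]

for c in creatives:
    # Share of impressions that survived the first three seconds.
    c["hook_rate"] = c["three_sec_views"] / c["impressions"]

best = max(creatives, key=lambda c: c["hook_rate"])
```

The point isn't the ranking itself — it's closing the loop: the winner's hook becomes the template for next month's 30-plus variations.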

The honest version of your dashboard

None of this means you should throw out ROAS or stop tracking CAC. These metrics still have uses — they're just not ground truth anymore, and treating them as such is how you end up confidently scaling the wrong thing.

The practical move is to build a two-layer view: your daily operating metrics (blended MER, channel-level spend, creative performance) for the decisions you make every week, and your validation layer (incrementality tests, cohort payback, contribution margin) for the decisions you make every quarter. The first layer tells you what's moving. The second layer tells you if any of it actually matters.
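The two layers can literally sit side by side as two numbers. A minimal sketch, with illustrative figures, assuming the validation baseline comes from a holdout like the one described earlier:

```python
# Two-layer view in miniature: the weekly operating number next to
# the quarterly validation number. All figures are illustrative.

revenue, total_spend = 900_000.0, 300_000.0

# Layer 1 (weekly): blended MER -- total revenue over total ad spend.
blended_mer = revenue / total_spend  # 3.0 -- tells you what's moving

# Layer 2 (quarterly): incremental MER, using a holdout-estimated
# baseline of what revenue does with spend paused.
baseline_revenue = 540_000.0
incremental_mer = (revenue - baseline_revenue) / total_spend  # 1.2
```

Layer 1 says the machine is running at 3x. Layer 2 says only 1.2x of that is the machine — the rest is the brand doing what it would have done anyway.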

Most DTC teams have the first layer. Very few have the second. And right now, with CAC up, attribution broken, and AI discovery eating the top of the funnel, the brands that build it are going to look like they have some kind of unfair advantage. They don't. They just stopped trusting numbers that were never quite telling the truth.