Methodology · April 2026
How a follow-up test turned a 48% CR loss into six figures of additional revenue
When a CRO test loses 48%, the easy lesson is "that idea was wrong." Sometimes it isn't. A good hypothesis can survive a bad execution, and the page you placed the test on is part of the execution.
A few months back, working with a D2C ecommerce client, we ran two tests with the same underlying goal: help the buyer figure out which size of one product line fit their use case. The product line had three sizes. Customer research had been telling us for months that buyers couldn't tell which one was theirs. Support tickets were full of "which one fits?" questions. Returns traced back to fit issues. The problem was real.
We tried solving it twice, in two different places. One test ran on the product detail page, where the buyer was about to pick a variant. The other ran on the category page, two clicks earlier, where the buyer was still deciding what to look at.
The PDP test lost 48% on conversion. We killed it inside a week.
The category page test added five figures in recurring revenue per month. It paid back the entire CRO engagement nine times over in year one.
Same conceptual fix, two different surfaces, opposite results. The only thing that changed was where the test was placed.
The PDP test (the loss)
The PDP test added a "Which size fits me?" link next to the variant selector. Click the link, get a clean modal with a visual size guide: photos, fit ranges, a CTA that pre-selected the correct variant before closing the modal. Standard pattern. The kind of thing fashion ecommerce has been doing well for ten years.
Add-to-cart rate dropped. Conversion rate dropped further. By the end of week one we'd seen enough.
What actually happened, when you watched the session recordings: the modal was working as designed. Buyers opened it. They scrolled through it. They sometimes used it. And then they either bounced, took the question to a competitor's site for a sanity check, or sat on the page longer without buying.
The modal didn't fail because the content was wrong. It failed because the interactive nature of the fix re-opened a question the buyer had already settled. We turned a confident click into "wait, am I sure?"
The category page test (the win)
The same buyer question got answered on the category page, two clicks earlier in the funnel.
We duplicated the single product card into three, one for each variant. Each one named the variant and tagged the use case it was built for, so the matching card stood out at a glance. The buyer didn't have to click anything to get the answer. The card on the category page already told them which variant fit their use case before they opened the PDP.
The category page change: one card became three, each labelled with the use case it fit.
Add-to-cart rate moved a little. Average order value moved more, because the labelling shifted buyers into the higher-AOV variant they'd been undershooting on. Revenue per visitor came in around +7%.
What we trusted wasn't the RPV number on its own. We saw a 33% increase in orders for the higher-priced variant at 98% confidence. The client's sister product range was already laid out exactly this way on the category page, split into labelled cards by use case with no interactive aids, and that had been the established pattern for years. The test extended an approach that already worked on one product line to a product line that hadn't been treated the same way. The data wasn't surfacing a new finding; it was reproducing a known one.
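As a sanity check on a figure like that, a two-proportion z-test is one common way to put a confidence number on an order-rate lift. The sketch below assumes that kind of test and uses hypothetical visitor and order counts, not the client's data.

```python
# Two-proportion z-test sketch for an A/B order-rate comparison.
# All counts below are hypothetical placeholders, not the client's data.
from math import sqrt
from statistics import NormalDist

control_visitors, control_orders = 3000, 90    # hypothetical control arm
test_visitors, test_orders = 3000, 120         # hypothetical test arm (~33% lift)

p_control = control_orders / control_visitors
p_test = test_orders / test_visitors

# Pooled order rate under the null hypothesis of no difference.
p_pool = (control_orders + test_orders) / (control_visitors + test_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / test_visitors))

z = (p_test - p_control) / se
# One-sided confidence that the test arm genuinely outperforms control.
confidence = NormalDist().cdf(z)

print(f"lift {p_test / p_control - 1:+.0%}, z = {z:.2f}, confidence = {confidence:.1%}")
```

At that rough scale the lift clears 98% confidence, but the reason to trust the result was the replication of an existing pattern, not the p-value alone.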
Why the page mattered more than the answer
Two things made the same fix work on one page and break on another.
1. Interactive elements near the buy button cost confidence.
A buyer reading product specs and clicking Add-to-Cart is in a momentum state. Anything that asks them to check something (a modal, a tooltip, a quiz, a guide) interrupts the momentum and re-opens the decision. Most of the time, the buyer doesn't return to the same level of confidence they had before they opened the modal. You've replaced "I'm buying this one" with "let me make sure," and "let me make sure" is a state customers leave the site to resolve.
The PDP loss wasn't about where the size guide sat on the page. It was about what the size guide was: an interactive, opt-in question next to the variant selector, telling the buyer they might be wrong. Even buyers who'd already made up their mind paused to consider whether to second-guess themselves. That hesitation cost half the conversion rate.
2. In this case, the sizing decision was being made before the PDP, not on it.
Buyers landing on the PDP for this product weren't using the page to decide which size they wanted. They were using it to confirm a decision they'd already made earlier in their session: by clicking through from a category page, by reading reviews, by talking to support. When the modal appeared on the PDP and asked them to re-open the sizing question, it was rewinding work they'd already done. The category page is where the decision was being formed. The fix belonged there.
The short version
If your test failed near the buy button, don't kill the idea. Move it upstream and try it as a label. Most of the time, the buyer didn't disagree with what you said. They disagreed with where you said it.
The follow-up test, run in a different place, is the difference between writing off a 48% loss as "that didn't work" and turning it into six figures of annualised revenue.
If your last few PDP tests lost and you're not sure why, test placement is the first thing to check. Book a 15-minute call and we can pull the funnel-level numbers together.